From Dense to Sparse: Contrastive Pruning for Better Pre-trained Language Model Compression
Runxin Xu, Fuli Luo, Chengyu Wang, Baobao Chang, Jun Huang, Songfang Huang, Fei Huang
Published in: CoRR (2021)
Keyphrases
- language model
- pre-trained
- language modeling
- training data
- information retrieval
- speech recognition
- n-gram
- training examples
- context-sensitive
- probabilistic model
- high-dimensional
- query expansion
- document retrieval
- ad hoc information retrieval
- retrieval model
- test collection
- mixture model
- control signals
- translation model
- sparse representation
- neural network
- smoothing methods
- computer vision
- data points
- unsupervised learning
- small number