From Dense to Sparse: Contrastive Pruning for Better Pre-trained Language Model Compression
Runxin Xu, Fuli Luo, Chengyu Wang, Baobao Chang, Jun Huang, Songfang Huang, Fei Huang
Published in: AAAI (2022)
Keyphrases
- language model
- pre-trained
- language modeling
- training data
- probabilistic model
- n-gram
- information retrieval
- speech recognition
- document retrieval
- retrieval model
- query expansion
- test collection
- context-sensitive
- training examples
- mixture model
- control signals
- translation model
- ad hoc information retrieval
- high-dimensional
- sparse representation
- information retrieval systems
- multimedia
- learning algorithm