Enabling Lightweight Fine-tuning for Pre-trained Language Model Compression based on Matrix Product Operators.
Peiyu Liu, Ze-Feng Gao, Wayne Xin Zhao, Zhi-Yuan Xie, Zhong-Yi Lu, Ji-Rong Wen. Published in: ACL/IJCNLP (1) (2021)
Keyphrases
- lightweight
- language model
- fine-tuning
- pre-trained
- language modeling
- n-gram
- speech recognition
- training data
- probabilistic model
- document retrieval
- training examples
- information retrieval
- retrieval model
- test collection
- wireless sensor networks
- query expansion
- smoothing methods
- mixture model
- control signals
- ad hoc information retrieval
- context sensitive
- active learning
- clustering method
- small number
- statistical model
- similarity measure
- neural network
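For readers unfamiliar with the decomposition named in the title, the sketch below illustrates how a weight matrix can be factorized into a Matrix Product Operator (a tensor-train-like chain of small core tensors) via sequential SVDs. This is a minimal numpy illustration under assumed shapes, variable names, and truncation rule; it is not the authors' implementation or the paper's exact factorization scheme.

```python
# Minimal sketch (assumption-based, not the authors' code) of factorizing a
# weight matrix into a Matrix Product Operator (MPO) via sequential SVDs.
import numpy as np

def mpo_decompose(W, in_dims, out_dims, max_rank=None):
    """Decompose W of shape (prod(in_dims), prod(out_dims)) into a list of
    4-way cores with shapes [r_{k-1}, in_dims[k], out_dims[k], r_k]."""
    assert W.shape == (int(np.prod(in_dims)), int(np.prod(out_dims)))
    n = len(in_dims)
    # Reshape to (i1,...,in, j1,...,jn), then interleave input/output legs
    # so the tensor is ordered (i1, j1, i2, j2, ..., in, jn).
    T = W.reshape(list(in_dims) + list(out_dims))
    perm = [axis for pair in zip(range(n), range(n, 2 * n)) for axis in pair]
    T = T.transpose(perm)

    cores, rank = [], 1
    rest = T.reshape(rank * in_dims[0] * out_dims[0], -1)
    for k in range(n - 1):
        U, S, Vt = np.linalg.svd(rest, full_matrices=False)
        r = len(S) if max_rank is None else min(len(S), max_rank)  # truncate bond
        cores.append(U[:, :r].reshape(rank, in_dims[k], out_dims[k], r))
        rank = r
        # Carry the remaining factor forward and expose the next pair of legs.
        rest = (S[:r, None] * Vt[:r]).reshape(
            rank * in_dims[k + 1] * out_dims[k + 1], -1)
    cores.append(rest.reshape(rank, in_dims[-1], out_dims[-1], 1))
    return cores

# Example: factor a 64x64 matrix with 4x4x4 input and output leg dimensions.
# Contracting the cores back together recovers W up to the truncation error.
W = np.random.randn(64, 64)
cores = mpo_decompose(W, (4, 4, 4), (4, 4, 4), max_rank=16)
print([c.shape for c in cores])
```

In the lightweight fine-tuning setting the paper's title describes, only a small subset of such factors needs to be updated while the rest stay frozen, which is what makes the approach attractive for compressing pre-trained language models; the bond-dimension choice (`max_rank` above) controls the accuracy/size trade-off in this sketch.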