One Teacher is Enough? Pre-trained Language Model Distillation from Multiple Teachers.
Chuhan Wu, Fangzhao Wu, Yongfeng Huang. Published in: ACL/IJCNLP (Findings) (2021)
Keyphrases
- language model
- professional development
- pre-trained
- language modeling
- n-gram
- speech recognition
- learning process
- information retrieval
- document retrieval
- probabilistic model
- smoothing methods
- query expansion
- retrieval model
- test collection
- ad hoc information retrieval
- mixture model
- context sensitive
- translation model
- training examples
- topic models
- unsupervised learning
- relevance feedback
- hidden Markov models
- relevance model
- neural network