MLKD-BERT: Multi-level Knowledge Distillation for Pre-trained Language Models.
Ying Zhang, Ziheng Yang, Shufan Ji. Published in: CoRR (2024)
Keyphrases
- language model
- language modeling
- pre-trained
- n-gram
- information retrieval
- speech recognition
- document retrieval
- language modelling (British spelling variant)
- query expansion
- probabilistic model
- smoothing methods
- retrieval model
- language models for information retrieval
- neural network
- prior knowledge
- training set
- statistical language models
- machine learning