Less is More: Task-aware Layer-wise Distillation for Language Model Compression
Chen Liang, Simiao Zuo, Qingru Zhang, Pengcheng He, Weizhu Chen, Tuo Zhao. Published in: CoRR (2022)
Keyphrases
- language model
- language modeling
- document retrieval
- n-gram
- probabilistic model
- information retrieval
- speech recognition
- retrieval model
- query expansion
- statistical language models
- mixture model
- test collection
- query terms
- context sensitive
- translation model
- ad hoc information retrieval
- smoothing methods
- pairwise
- relevance model
- language model for information retrieval
- document length