Less is More: Task-aware Layer-wise Distillation for Language Model Compression.
Chen Liang, Simiao Zuo, Qingru Zhang, Pengcheng He, Weizhu Chen, Tuo Zhao. Published in: ICML (2023)
Keyphrases
- language model
- language modeling
- n gram
- probabilistic model
- document retrieval
- language modelling
- information retrieval
- retrieval model
- test collection
- speech recognition
- query expansion
- smoothing methods
- pairwise
- mixture model
- statistical language models
- ad hoc information retrieval
- context sensitive
- vector space model
- language model for information retrieval
- translation model
- relevance model
- pseudo relevance feedback
- query terms
- document ranking
- word error rate
- query specific
- generative model
- web search
- machine learning
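The title refers to layer-wise distillation for compressing language models: a smaller student model is trained to match a larger teacher both at intermediate layers and at the output. As a rough, hedged sketch of that general objective (not the paper's specific task-aware TED filters), the following PyTorch snippet combines a hidden-state MSE over an assumed student-to-teacher layer mapping with a soft-label KL term; the toy models, layer map, `temperature`, and `alpha` are all illustrative assumptions.

```python
# Minimal sketch of generic layer-wise knowledge distillation:
# the student matches selected teacher hidden states (MSE) plus soft logits (KL).
# Illustration of the general idea only, not the paper's task-aware (TED) method;
# layer sizes, the layer mapping, and loss weights are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyEncoder(nn.Module):
    """Toy stack of linear+ReLU 'layers' standing in for transformer blocks."""
    def __init__(self, num_layers, hidden_dim, num_classes):
        super().__init__()
        self.layers = nn.ModuleList(
            [nn.Linear(hidden_dim, hidden_dim) for _ in range(num_layers)]
        )
        self.head = nn.Linear(hidden_dim, num_classes)

    def forward(self, x):
        hidden_states = []
        for layer in self.layers:
            x = torch.relu(layer(x))
            hidden_states.append(x)
        return self.head(x), hidden_states

def layerwise_distillation_loss(student_out, teacher_out, layer_map,
                                temperature=2.0, alpha=0.5):
    """Hidden-state MSE over mapped layers plus soft-label KL on the logits."""
    s_logits, s_hidden = student_out
    t_logits, t_hidden = teacher_out
    # Align each student layer with its assigned teacher layer.
    hidden_loss = sum(
        F.mse_loss(s_hidden[s_idx], t_hidden[t_idx].detach())
        for s_idx, t_idx in layer_map.items()
    ) / len(layer_map)
    # Standard soft-target distillation on the output logits.
    logit_loss = F.kl_div(
        F.log_softmax(s_logits / temperature, dim=-1),
        F.softmax(t_logits.detach() / temperature, dim=-1),
        reduction="batchmean",
    ) * temperature ** 2
    return alpha * hidden_loss + (1 - alpha) * logit_loss

if __name__ == "__main__":
    torch.manual_seed(0)
    teacher = TinyEncoder(num_layers=6, hidden_dim=32, num_classes=4)
    student = TinyEncoder(num_layers=3, hidden_dim=32, num_classes=4)
    # Map each student layer to every second teacher layer (an assumption).
    layer_map = {0: 1, 1: 3, 2: 5}
    x = torch.randn(8, 32)
    with torch.no_grad():
        teacher_out = teacher(x)
    loss = layerwise_distillation_loss(student(x), teacher_out, layer_map)
    print(f"distillation loss: {loss.item():.4f}")
```

In practice the student and teacher hidden sizes often differ, in which case a learned projection is typically inserted before the per-layer MSE; the sketch keeps the dimensions equal to stay short.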