BESA: Pruning Large Language Models with Blockwise Parameter-Efficient Sparsity Allocation.
Peng Xu, Wenqi Shao, Mengzhao Chen, Shitao Tang, Kaipeng Zhang, Peng Gao, Fengwei An, Yu Qiao, Ping Luo
Published in: ICLR (2024)
Keyphrases
- language model
- language modeling
- probabilistic model
- n-gram
- statistical language models
- speech recognition
- document retrieval
- information retrieval
- retrieval model
- vector space model
- language modelling
- query expansion
- high dimensional
- ad hoc information retrieval
- relevance model
- smoothing methods
- test collection
- machine learning
- document ranking
- Bayesian networks
- translation model
- passage retrieval
- pseudo relevance feedback
- document length
- word error rate
- query terms