MoA: Mixture of Sparse Attention for Automatic Large Language Model Compression.
Tianyu Fu, Haofeng Huang, Xuefei Ning, Genghan Zhang, Boju Chen, Tianqi Wu, Hongyi Wang, Zixiao Huang, Shiyao Li, Shengen Yan, Guohao Dai, Huazhong Yang, Yu Wang
Published in: CoRR (2024)
Keyphrases
- language model
- mixture model
- language modeling
- n-gram
- probabilistic model
- document retrieval
- information retrieval
- test collection
- query expansion
- speech recognition
- context sensitive
- retrieval model
- ad hoc information retrieval
- statistical language models
- query terms
- smoothing methods
- language model for information retrieval
- vector space model
- pseudo relevance feedback
- translation model
- document ranking
- query specific
- retrieval effectiveness
- expectation maximization
- high dimensional
- document length
- word error rate