Prefixing Attention Sinks can Mitigate Activation Outliers for Large Language Model Quantization.
Seungwoo Son, Wonpyo Park, Woohyun Han, Kyuyeun Kim, Jaeho Lee
Published in: CoRR (2024)
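The title points at a concrete technique: prepending attention-sink tokens so that activation outliers are mitigated, which in turn helps low-bit quantization. Below is a minimal, hypothetical Python sketch of one way that idea could play out in a calibration pipeline — excluding the prefixed sink positions from activation-quantization statistics so the remaining tokens quantize with smaller scales. This is not the authors' implementation; the function names, the single-sink prefix, and the synthetic data are all assumptions made for illustration.

```python
# Hypothetical sketch: compute activation-quantization scales while
# excluding a prefixed "sink" position where outlier mass concentrates.
import torch

def absmax_scales(hidden: torch.Tensor, prefix_len: int, n_bits: int = 8) -> torch.Tensor:
    """Per-channel symmetric quantization scales from non-prefix positions.

    hidden: [batch, seq_len, d_model] activations captured during calibration.
    prefix_len: number of prefixed sink tokens to exclude from statistics.
    """
    body = hidden[:, prefix_len:, :]          # drop the sink positions
    absmax = body.abs().amax(dim=(0, 1))      # per-channel absolute max
    qmax = 2 ** (n_bits - 1) - 1              # 127 for int8
    return absmax.clamp(min=1e-8) / qmax

def quantize(hidden: torch.Tensor, scales: torch.Tensor, n_bits: int = 8):
    """Round-to-nearest symmetric quantization; returns codes and dequantized values."""
    qmax = 2 ** (n_bits - 1) - 1
    q = torch.clamp(torch.round(hidden / scales), -qmax - 1, qmax)
    return q.to(torch.int8), q * scales

# Synthetic demo: outliers placed at the first (sink) position only.
torch.manual_seed(0)
h = torch.randn(2, 16, 64)
h[:, 0, :8] *= 50.0                           # outlier mass sits at the sink
scales = absmax_scales(h, prefix_len=1)       # stats exclude the sink
_, deq = quantize(h[:, 1:, :], scales)
print((h[:, 1:, :] - deq).abs().mean())       # small error on the body tokens
```

Had the sink position been included in the calibration statistics, the 50x outlier would inflate the per-channel scales and wash out the resolution available to ordinary tokens; excluding it keeps the scales matched to the bulk of the distribution.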