KV Cache is 1 Bit Per Channel: Efficient Large Language Model Inference with Coupled Quantization.
Tianyi Zhang, Jonah Yi, Zhaozhuo Xu, Anshumali Shrivastava. Published in: CoRR (2024)
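To make the title's headline claim concrete, namely that a key-value (KV) cache can be stored at roughly one bit per channel, the sketch below shows a naive per-channel sign-plus-scale quantizer on a toy cache slice. This is not the paper's coupled quantization method (which, per the title, quantizes channels jointly rather than independently); all function names, shapes, and the NumPy framing are illustrative assumptions.

```python
import numpy as np

def quantize_1bit_per_channel(x: np.ndarray):
    """Naive 1-bit-per-channel baseline (illustrative, not the paper's method):
    each cache entry keeps only its sign, plus one fp16 scale per channel."""
    # x: (num_tokens, num_channels) slice of a K or V cache.
    scale = np.abs(x).mean(axis=0, keepdims=True)  # one scale per channel
    signs = np.signbit(x)                          # 1 bit per entry
    return signs, scale.astype(np.float16)

def dequantize(signs: np.ndarray, scale: np.ndarray) -> np.ndarray:
    # Map each sign bit back to +scale or -scale for its channel.
    return np.where(signs, -scale, scale).astype(np.float32)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    kv = rng.standard_normal((128, 64)).astype(np.float32)  # toy KV slice
    signs, scale = quantize_1bit_per_channel(kv)
    recon = dequantize(signs, scale)
    print(f"mean abs reconstruction error: {np.abs(kv - recon).mean():.3f}")
```

At this bit budget (one sign bit per entry plus a per-channel scale amortized over tokens), independent per-channel quantization of this kind incurs substantial reconstruction error; the paper's coupled quantization, as the title suggests, targets the same budget by exploiting dependence between channels.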
Keyphrases
- language model
- language modeling
- n-gram
- document retrieval
- speech recognition
- probabilistic model
- information retrieval
- context-sensitive
- language model for information retrieval
- query expansion
- relevance model
- query terms
- retrieval model
- test collection
- mixture model
- statistical language models
- ad hoc information retrieval
- translation model
- statistical machine translation
- smoothing methods
- word error rate
- image retrieval