InfiniGen: Efficient Generative Inference of Large Language Models with Dynamic KV Cache Management.
Wonbeom Lee, Jungi Lee, Junghwan Seo, Jaewoong Sim
Published in: CoRR (2024)
Keyphrases
- language model
- language modeling
- probabilistic model
- cache management
- document retrieval
- context sensitive
- generative model
- n-gram
- speech recognition
- language models for information retrieval
- query expansion
- retrieval model
- language modelling
- information retrieval
- bayesian networks
- spoken term detection
- relevance model
- test collection
- data model
- data structure