LLM in a flash: Efficient Large Language Model Inference with Limited Memory.
Keivan Alizadeh, Seyed-Iman Mirzadeh, Dmitry Belenko, S. Khatamifard, Minsik Cho, Carlo C. del Mundo, Mohammad Rastegari, Mehrdad Farajtabar
Published in: ACL (1) (2024)
Keyphrases
- limited memory
- language model
- language modeling
- n-gram
- query expansion
- memory space
- sliding window
- speech recognition
- probabilistic model
- document retrieval
- real time
- retrieval model
- information retrieval
- statistical language models
- context sensitive
- data streams
- influence diagrams
- language modelling
- Bayesian networks
- quasi-Newton method
- ad hoc information retrieval
- probabilistic inference
- belief networks