LLM in a flash: Efficient Large Language Model Inference with Limited Memory.
Keivan Alizadeh, Iman Mirzadeh, Dmitry Belenko, Karen Khatamifard, Minsik Cho, Carlo C. Del Mundo, Mohammad Rastegari, Mehrdad Farajtabar. Published in: CoRR (2023)
Keyphrases
- limited memory
- language model
- language modeling
- n gram
- memory space
- information retrieval
- speech recognition
- data streams
- query expansion
- influence diagrams
- document retrieval
- retrieval model
- real time
- probabilistic model
- sliding window
- test collection
- mixture model
- bayesian networks
- quasi newton method
- context sensitive
- probabilistic inference
- ad hoc information retrieval
- query terms
- dynamic programming
- statistical language models