ServerlessLLM: Low-Latency Serverless Inference for Large Language Models.
Yao Fu, Leyang Xue, Yeqi Huang, Andrei-Octavian Brabete, Dmitrii Ustiugov, Yuvraj Patel, Luo Mai
Published in: OSDI (2024)
Keyphrases
- language model
- low latency
- high speed
- high throughput
- real time
- virtual machine
- highly efficient
- stream processing
- low cost
- machine learning