FlashDecoding++: Faster Large Language Model Inference on GPUs.
Ke Hong, Guohao Dai, Jiaming Xu, Qiuli Mao, Xiuhong Li, Jun Liu, Kangdi Chen, Yuhan Dong, Yu Wang
Published in: CoRR (2023)
Keyphrases
- language model
- language modeling
- n gram
- probabilistic model
- query expansion
- document retrieval
- speech recognition
- information retrieval
- statistical language models
- test collection
- retrieval model
- document ranking
- query terms
- context sensitive
- mixture model
- bayesian inference
- pseudo relevance feedback
- language models for information retrieval
- vector space model
- bayesian networks
- query specific
- word error rate
- pseudo feedback
- ad hoc information retrieval
- parameter estimation