Reducing the Energy Dissipation of Large Language Models (LLMs) with Approximate Memories.
Zhen Gao, Jie Deng, Pedro Reviriego, Shanshan Liu, Fabrizio Lombardi
Published in: ISCAS (2024)
Keyphrases
- language model
- energy dissipation
- language modeling
- n gram
- document retrieval
- probabilistic model
- information retrieval
- low power
- speech recognition
- test collection
- retrieval model
- power supply
- query expansion
- context sensitive
- statistical language models
- traffic flow
- pseudo relevance feedback
- document ranking
- language models for information retrieval
- vector space model
- smoothing methods
- long range
- low cost
- relevant documents
- energy consumption
- statistical language modeling
- spoken term detection