Towards Anytime Fine-tuning: Continually Pre-trained Language Models with Hypernetwork Prompts
Gangwei Jiang, Caigao Jiang, Siqiao Xue, James Zhang, Jun Zhou, Defu Lian, Ying Wei. Published in: EMNLP (Findings) (2023)
Keyphrases
- fine-tuning
- language model
- pre-trained
- language modeling
- training data
- probabilistic model
- n-gram
- speech recognition
- training examples
- information retrieval
- document retrieval
- retrieval model
- test collection
- language modelling
- query expansion
- fine-tuned
- control signals
- statistical language models
- neural network
- relevance model
- document ranking
- smoothing methods
- supervised learning
- language models for information retrieval