Mixture-of-Domain-Adapters: Decoupling and Injecting Domain Knowledge to Pre-trained Language Models' Memories.
Shizhe Diao, Tianyang Xu, Ruijia Xu, Jiawei Wang, Tong Zhang
Published in: CoRR (2023)
Keyphrases
- language model
- domain knowledge
- pre-trained
- mixture model
- language modeling
- input-output
- n-gram
- document retrieval
- probabilistic model
- language modelling
- speech recognition
- language models for information retrieval
- information retrieval
- query expansion
- test collection
- retrieval model
- statistical language models
- smoothing methods
- training data
- relevance model
- small number
- feature selection
- multi-modal
- supervised learning
- document ranking
- machine learning
- data sets