Mixture-of-Domain-Adapters: Decoupling and Injecting Domain Knowledge to Pre-trained Language Models' Memories
Shizhe Diao, Tianyang Xu, Ruijia Xu, Jiawei Wang, Tong Zhang. Published in: ACL (1) (2023)
Keyphrases
- language model
- domain knowledge
- pre-trained
- mixture model
- input-output
- language modeling
- information retrieval
- n-gram
- probabilistic model
- retrieval model
- speech recognition
- query expansion
- document retrieval
- test collection
- smoothing methods
- language modelling
- training data
- statistical language models
- language models for information retrieval
- training examples
- document ranking
- data sets
- EM algorithm
- small number
- relevance model