Rewire-then-Probe: A Contrastive Recipe for Probing Biomedical Knowledge of Pre-trained Language Models
Zaiqiao Meng, Fangyu Liu, Ehsan Shareghi, Yixuan Su, Charlotte Collins, Nigel Collier
Published in: ACL (1) (2022)
Keyphrases
- language model
- pre-trained
- language modeling
- document retrieval
- speech recognition
- probabilistic model
- information retrieval
- n-gram
- statistical language models
- knowledge discovery
- query expansion
- prior knowledge
- retrieval model
- smoothing methods
- document ranking
- ad hoc information retrieval
- language models for information retrieval
- small number
- neural network
- relevance model
- training data
- Bayesian networks
- test collection
- text mining