Co²PT: Mitigating Bias in Pre-trained Language Models through Counterfactual Contrastive Prompt Tuning.
Xiangjue Dong, Ziwei Zhu, Zhuoer Wang, Maria Teleki, James Caverlee
Published in: EMNLP (Findings) (2023)
Keyphrases
- language model
- pre-trained
- language modeling
- training data
- n-gram
- probabilistic model
- document retrieval
- information retrieval
- speech recognition
- training examples
- query expansion
- statistical language models
- retrieval model
- smoothing methods
- control signals
- test collection
- relevance model
- language models for information retrieval
- document ranking
- data sets