Co$^2$PT: Mitigating Bias in Pre-trained Language Models through Counterfactual Contrastive Prompt Tuning.

Xiangjue Dong, Ziwei Zhu, Zhuoer Wang, Maria Teleki, James Caverlee
Published in: CoRR (2023)