ReAugKD: Retrieval-Augmented Knowledge Distillation For Pre-trained Language Models.

Jianyi Zhang, Aashiq Muhamed, Aditya Anantharaman, Guoyin Wang, Changyou Chen, Kai Zhong, Qingjun Cui, Yi Xu, Belinda Zeng, Trishul Chilimbi, Yiran Chen
Published in: ACL (2) (2023)