Improved Knowledge Distillation for Pre-trained Language Models via Knowledge Selection.
Chenglong Wang
Yi Lu
Yongyu Mu
Yimin Hu
Tong Xiao
Jingbo Zhu
Published in: EMNLP (Findings), 2022
Keyphrases
language model
prior knowledge
hidden Markov models
n-gram
speech recognition
document retrieval