Robust Knowledge Distillation from RNN-T Models With Noisy Training Labels Using Full-Sum Loss.
Mohammad Zeineldeen
Kartik Audhkhasi
Murali Karthick Baskar
Bhuvana Ramabhadran
Published in: CoRR (2023)
Keyphrases
prior knowledge
domain knowledge
knowledge acquisition
knowledge base
statistical models
feature selection
knowledge representation
data sets
online learning
knowledge based systems
nearest neighbor
formal models
incomplete data
domain experts
computationally efficient
training set
training data
neural network