Robust Knowledge Distillation from RNN-T Models with Noisy Training Labels Using Full-Sum Loss.
Mohammad Zeineldeen, Kartik Audhkhasi, Murali Karthick Baskar, Bhuvana Ramabhadran
Published in: ICASSP (2023)
Keyphrases
- prior knowledge
- computationally efficient
- training set
- domain knowledge
- knowledge discovery
- knowledge management
- knowledge acquisition
- recurrent neural networks
- complex systems
- training examples
- knowledge sharing
- knowledge based systems
- neural network model
- formal models
- data mining
- subject matter experts
- incomplete data
- background knowledge
- training samples
- image classification
- support vector machine
- knowledge representation
- knowledge base
- learning algorithm