Multiple-hypothesis RNN-T Loss for Unsupervised Fine-tuning and Self-training of Neural Transducer.
Cong-Thanh Do, Mohan Li, Rama Doddipatla
Published in: INTERSPEECH (2022)
Keyphrases
- fine-tuning
- multiple-hypothesis
- recurrent neural networks
- semi-supervised
- neural model
- nearest neighbor
- network architecture
- fine-tune
- neural network
- particle filter
- viable alternative
- supervised learning
- Hebbian learning
- semi-supervised learning
- unsupervised learning
- training set
- co-training
- data-driven
- fine-tuned
- unsupervised manner
- neural fuzzy
- cost-sensitive
- neural computation
- neural learning
- artificial neural
- genetic algorithm
- feature selection
- spike trains
- image segmentation
- supervised classification
- dynamic programming
- associative memory