Multiple-hypothesis RNN-T Loss for Unsupervised Fine-tuning and Self-training of Neural Transducer
Cong-Thanh Do, Mohan Li, Rama Doddipatla
Published in: CoRR (2022)
Keyphrases
- fine-tuning
- multiple hypothesis
- recurrent neural networks
- semi-supervised
- nearest neighbor
- neural model
- fine-tune
- particle filter
- neural network
- viable alternative
- Hebbian learning
- supervised learning
- semi-supervised learning
- training set
- network architecture
- unsupervised learning
- co-training
- data-driven
- fine-tuned
- completely unsupervised
- neural learning
- artificial neural
- neural computation
- cost-sensitive
- semi-supervised classification
- learning rules
- associative memory
- back-propagation
- active learning
- artificial neural networks
- search algorithm
- training data
- decision trees
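The paper's title refers to a loss computed over multiple decoding hypotheses rather than a single pseudo-label. As a minimal, hypothetical sketch (not the authors' actual method): one common way to combine per-hypothesis losses is to weight each hypothesis's RNN-T loss by a softmax over its decoder score, so that more confident pseudo-labels contribute more to the fine-tuning objective. All names, values, and the weighting scheme below are illustrative assumptions.

```python
import math

def combine_hypothesis_losses(losses, scores, temperature=1.0):
    """Combine per-hypothesis losses into one training objective.

    losses: RNN-T loss of each N-best hypothesis (illustrative values).
    scores: decoder log-scores of the same hypotheses; a softmax over
            them (with an optional temperature) gives the weights.
    Returns the score-weighted sum of the losses.
    """
    exps = [math.exp(s / temperature) for s in scores]
    z = sum(exps)
    weights = [e / z for e in exps]
    return sum(w * l for w, l in zip(weights, losses))

# Three pseudo-label hypotheses: lower-loss, higher-scoring hypotheses
# dominate the combined objective.
combined = combine_hypothesis_losses([2.3, 2.9, 3.4], [0.5, 0.2, -0.1])
```

The combined value always lies between the smallest and largest per-hypothesis loss, since the weights are a convex combination.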