Pruned RNN-T for fast, memory-efficient ASR training.
Fangjun Kuang, Liyong Guo, Wei Kang, Long Lin, Mingshuang Luo, Zengwei Yao, Daniel Povey
Published in: INTERSPEECH (2022)
Keyphrases
- memory efficient
- nearest neighbor
- recurrent neural networks
- speech recognition
- external memory
- automatic speech recognition
- training process
- training set
- training algorithm
- test set
- training data
- training examples
- supervised learning
- data sets
- multiple sequence alignment
- iterative deepening
- back propagation
- training samples
- object detection
- training phase
- pattern recognition
- data structure
- pruning algorithm