Listen Attentively, and Spell Once: Whole Sentence Generation via a Non-Autoregressive Architecture for Low-Latency Speech Recognition.
Ye Bai, Jiangyan Yi, Jianhua Tao, Zhengkun Tian, Zhengqi Wen, Shuai Zhang
Published in: INTERSPEECH (2020)
Keyphrases
- speech recognition
- autoregressive
- low latency
- real time
- non-stationary
- hidden Markov models
- speech signal
- language model
- pattern recognition
- high throughput
- automatic speech recognition
- random fields
- highly efficient
- high speed
- stream processing
- speech recognition systems
- linear prediction
- virtual machine
- natural language
- ad hoc networks
- SAR images
- wireless networks
- probabilistic model
- computer vision