An 8.93 TOPS/W LSTM Recurrent Neural Network Accelerator Featuring Hierarchical Coarse-Grain Sparsity for On-Device Speech Recognition
Deepak Kadetotad, Shihui Yin, Visar Berisha, Chaitali Chakrabarti, Jae-sun Seo
Published in: IEEE J. Solid-State Circuits (2020)
Keyphrases
- recurrent neural networks
- speech recognition
- coarse grain
- fine grain
- long short-term memory
- language model
- feed forward
- pattern recognition
- hidden Markov models
- speech synthesis
- recurrent networks
- neural network
- automatic speech recognition
- speech recognition technology
- speech processing
- artificial neural networks
- speech recognizer
- hidden layer
- noisy environments
- speaker identification
- speech signal
- parallel computation
- speech recognition systems
- high dimensional
- parallel implementation
- speaker independent
- speaker dependent
- audio visual speech recognition
- speaker adaptation