An Energy-Efficient Architecture for Accelerating Inference of Memory-Augmented Neural Networks
Jianxun Yang, Leibo Liu, Jin Zhang, Shaojun Wei, Shouyi Yin
Published in: NANOARCH (2019)
Keyphrases
- neural network
- associative memory
- network architecture
- inference engine
- software architecture
- multi layer
- auto associative
- wireless sensor networks
- back propagation
- pattern recognition
- neural network model
- artificial neural networks
- memory management
- memory space
- memory access
- neural network structure
- real time
- level parallelism
- probabilistic inference
- fuzzy logic
- Bayesian networks
- hardware implementation
- training process
- fuzzy neural network
- recurrent neural networks
- energy efficient
- memory requirements
- inference process
- power consumption
- multithreading
- energy consumption
- memory hierarchy
- expert systems