8T XNOR-SRAM based Parallel Compute-in-Memory for Deep Neural Network Accelerator.
Hongwu Jiang, Rui Liu, Shimeng Yu. Published in: MWSCAS (2020)
Keyphrases
- neural network
- compute-intensive
- parallel implementation
- auto-associative
- random access memory
- parallel hardware
- artificial neural networks
- associative memory
- computer architecture
- memory usage
- multi-threaded
- memory footprint
- processing elements
- neural network model
- power consumption
- shared memory
- memory requirements
- backpropagation
- computational power
- neural network is trained
- parallel programming
- recurrent neural networks
- radial basis function
- self-organizing maps
- genetic algorithm
- computing power
- parallel computers
- data transfer
- data transmission
- network architecture
- low voltage
- multilayer perceptron