A Scalable Near-Memory Architecture for Training Deep Neural Networks on Large In-Memory Datasets.
Fabian Schuiki, Michael Schaffner, Frank K. Gürkaynak, Luca Benini
Published in: IEEE Trans. Computers (2019)
Keyphrases
- neural network
- associative memory
- low memory
- training algorithm
- main memory
- memory management
- recurrent neural networks
- computing power
- memory usage
- memory requirements
- computational power
- pattern recognition
- limited memory
- training process
- genetic algorithm
- training dataset
- training phase
- back propagation
- feedforward neural networks
- memory access
- memory size
- auto-associative
- real time