Mixed-precision architecture based on computational memory for training deep neural networks.
S. R. Nandakumar, Manuel Le Gallo, Irem Boybat, Bipin Rajendran, Abu Sebastian, Evangelos Eleftheriou
Published in: ISCAS (2018)
Keyphrases
- neural network
- training process
- associative memory
- feed forward neural networks
- training algorithm
- multi layer
- network architecture
- neural network structure
- multi layer perceptron
- feedforward neural networks
- computational power
- error back propagation
- deep architectures
- backpropagation algorithm
- auto associative
- pattern recognition
- training examples
- software architecture
- artificial neural networks
- learning capabilities
- management system
- test set
- recurrent networks
- memory hierarchy
- feed forward
- computing power
- activation function
- neural nets
- real time
- training set
- multilayer neural network
- fuzzy neural network
- processing elements
- memory management
- memory requirements
- precision and recall
- back propagation
- training samples
- memory access
- memory usage
- training phase
- expert systems
- fuzzy systems