Compressing DMA Engine: Leveraging Activation Sparsity for Training Deep Neural Networks
Minsoo Rhu, Mike O'Connor, Niladrish Chatterjee, Jeff Pool, Youngeun Kwon, Stephen W. Keckler
Published in: HPCA (2018)
Keyphrases
- neural network
- training process
- training algorithm
- multi layer perceptron
- back propagation
- feedforward neural networks
- backpropagation algorithm
- high dimensional
- pattern recognition
- neural network training
- training patterns
- feed forward neural networks
- training set
- genetic algorithm
- artificial neural networks
- fuzzy systems
- radial basis function network
- recurrent neural networks
- multilayer perceptron
- neural network model
- fault diagnosis
- information processing
- training samples
- recurrent networks
- fuzzy logic
- deep learning
- deep belief networks
- activation function
- neural nets
- main memory
- sparse representation
- semi supervised
- hidden markov models
- learning algorithm
- machine learning
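The title and keyphrases ("sparse representation", "activation function", "main memory") point at the core idea: ReLU zeroes out a large fraction of activation values, so activations can be compressed before being moved over the memory bus. As a minimal illustrative sketch (not the paper's hardware design), the following shows zero-value compression, assuming NumPy: the dense tensor is split into its nonzero values plus a one-bit-per-element presence mask, and can be rebuilt losslessly.

```python
import numpy as np

def compress_zvc(activations):
    """Zero-value compression: keep only nonzero entries plus a presence bitmask."""
    mask = activations != 0
    values = activations[mask]
    return values, mask

def decompress_zvc(values, mask):
    """Rebuild the dense tensor from the nonzero values and the bitmask."""
    dense = np.zeros(mask.shape, dtype=values.dtype)
    dense[mask] = values
    return dense

# After ReLU, a large fraction of activations are exactly zero, so the
# compressed payload (nonzero values + 1 bit/element mask) is much smaller
# than the dense tensor that would otherwise cross the memory bus.
rng = np.random.default_rng(0)
x = rng.standard_normal((4, 64)).astype(np.float32)
act = np.maximum(x, 0.0)                 # ReLU activation
vals, mask = compress_zvc(act)
restored = decompress_zvc(vals, mask)
sparsity = 1.0 - vals.size / act.size    # fraction of zeros removed
```

The round trip is exact because only true zeros are dropped; the mask records where every surviving value belongs.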