Compressing DMA Engine: Leveraging Activation Sparsity for Training Deep Neural Networks.
Minsoo Rhu, Mike O'Connor, Niladrish Chatterjee, Jeff Pool, Stephen W. Keckler
Published in: CoRR (2017)
Keyphrases
- artificial neural networks
- neural network
- feed forward neural networks
- back propagation
- multi layer perceptron
- training algorithm
- backpropagation algorithm
- neural network model
- feedforward neural networks
- feed forward
- training process
- multilayer perceptron
- genetic algorithm
- neural network training
- hidden layer
- recurrent neural networks
- training samples
- training set
- error back propagation
- deep architectures
- training patterns
- information processing
- supervised learning
- air fuel ratio
- data sets
- deep belief networks
- active learning
- support vector machine
- computer systems
- network architecture
- test set
- multi layer
- data compression