SparseNN: An Energy-Efficient Neural Network Accelerator Exploiting Input and Output Sparsity.
Jingyang Zhu, Jingbo Jiang, Xizi Chen, Chi-Ying Tsui
Published in: CoRR (2017)
Keyphrases
- desired output
- neural network
- hidden units
- input variables
- input data
- multiple output
- rbf network
- connection weights
- artificial neural networks
- feed forward neural networks
- bayes rule
- network architecture
- activation function
- hidden layer
- back propagation
- number of hidden units
- pattern recognition
- recurrent neural networks
- neural network model
- input pattern
- variable selection
- high dimensional
- control signals
- wireless sensor networks
- parallel implementation
- energy efficient
- feed forward
- training algorithm
- self organizing maps
- nonlinear functions
- sparse representation
- feedforward neural networks
- input patterns
- data sets
- network model
- genetic algorithm
- neural network is trained
- sigmoid function
- neural nets
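
The paper describes a hardware accelerator, but the core idea named in the title can be sketched in software: skip the multiply-accumulates associated with zero input activations, and note that ReLU makes the layer's outputs sparse in turn, so the saving compounds layer to layer. The sketch below is a minimal illustrative analogue of that zero-skipping idea, not the SparseNN architecture itself; the function name, shapes, and random data are assumptions made for the example.

```python
import numpy as np

def dense_layer_skip_zeros(W, x, b):
    """Matrix-vector product that visits only nonzero input activations.

    Illustrative software analogue of input-sparsity exploitation:
    each zero entry of x lets us skip an entire column of MACs.
    """
    y = b.astype(float)
    for j in np.flatnonzero(x):      # indices of nonzero inputs only
        y += W[:, j] * x[j]          # one column of MACs per nonzero input
    return np.maximum(y, 0.0)        # ReLU yields output sparsity in turn

# Tiny usage example with a mostly-zero (e.g. post-ReLU) input vector.
rng = np.random.default_rng(0)
W = rng.standard_normal((4, 8))
b = np.zeros(4)
x = np.where(rng.random(8) < 0.75, 0.0, rng.standard_normal(8))
print(dense_layer_skip_zeros(W, x, b))
```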