Effective Post-Training Quantization of Neural Networks for Inference on Low Power Neural Accelerator
Alexander Demidovskij, Eugene Smirnov. Published in: IJCNN (2020)
Keyphrases
- low power
- neural network
- power consumption
- high speed
- low cost
- training process
- network architecture
- single chip
- artificial neural networks
- pattern recognition
- VLSI circuits
- multi layer perceptron
- high power
- recurrent networks
- logic circuits
- gate array
- low power consumption
- image sensor
- parallel implementation
- back propagation
- image processing
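
For orientation only, the sketch below illustrates generic symmetric per-tensor int8 post-training quantization, the broad technique named in the title. It is a minimal example assuming NumPy, not the authors' method or their accelerator pipeline; the function names are hypothetical.

```python
# Minimal sketch of symmetric per-tensor post-training quantization (PTQ)
# to int8. Illustrative only; not the paper's specific algorithm.
import numpy as np

def quantize_symmetric_int8(weights: np.ndarray):
    """Map float weights to int8 with a single per-tensor scale."""
    max_abs = float(np.max(np.abs(weights)))
    scale = max_abs / 127.0 if max_abs > 0 else 1.0
    q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    """Recover an approximation of the original float weights."""
    return q.astype(np.float32) * scale

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    w = rng.normal(0.0, 0.5, size=(64, 64)).astype(np.float32)
    q, scale = quantize_symmetric_int8(w)
    w_hat = dequantize(q, scale)
    # The round-trip error should stay small relative to the weight range.
    print("max abs error:", np.max(np.abs(w - w_hat)))
```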