Improving Neural Network Efficiency via Post-training Quantization with Adaptive Floating-Point.
Fangxin Liu, Wenbo Zhao, Zhezhi He, Yanzhi Wang, Zongwu Wang, Changzhi Dai, Xiaoyao Liang, Li Jiang
Published in: ICCV (2021)
Keyphrases
- floating point
- neural network
- training process
- training algorithm
- square root
- feedforward neural networks
- fixed point
- multi-layer perceptron
- instruction set
- floating point arithmetic
- sparse matrices
- artificial neural networks
- computational complexity
- data processing
- reinforcement learning
- image processing
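
The title names a concrete technique: post-training quantization of a trained network into a low-bit, adaptively chosen floating-point format. As a rough sketch of that general idea only (not the authors' algorithm; `afp_quantize`, `search_format`, and the per-tensor bit-budget search are all names and assumptions made up for this example), one could pick, for each tensor, the exponent/mantissa bit split within a fixed budget that minimizes quantization error:

```python
# Illustrative sketch only: the paper's actual AFP method is not
# reproduced here; all names and the bit-budget search are assumptions.
import numpy as np

def afp_quantize(tensor, exp_bits, man_bits):
    """Round a float array to a toy floating-point format with `exp_bits`
    exponent bits and `man_bits` mantissa bits (plus an implicit sign bit)."""
    bias = 2 ** (exp_bits - 1) - 1            # symmetric exponent range
    sign = np.sign(tensor)
    mag = np.abs(tensor).astype(np.float64)
    safe = np.where(mag == 0, 1.0, mag)       # avoid log2(0); zeros handled below
    exp = np.clip(np.floor(np.log2(safe)), -bias, bias)
    scale = 2.0 ** (exp - man_bits)           # value of one mantissa ULP
    frac = np.clip(np.round(mag / scale), 0, 2 ** (man_bits + 1) - 1)
    return np.where(mag == 0, 0.0, sign * frac * scale)

def search_format(tensor, budget=8):
    """Pick the exponent/mantissa split within a total bit budget (sign
    included) that minimizes mean squared quantization error per tensor."""
    best = None
    for exp_bits in range(1, budget - 1):
        man_bits = budget - 1 - exp_bits
        err = np.mean((tensor - afp_quantize(tensor, exp_bits, man_bits)) ** 2)
        if best is None or err < best[0]:
            best = (err, exp_bits, man_bits)
    return best[1], best[2]

weights = np.random.randn(1024)               # stand-in for a trained layer's weights
e, m = search_format(weights, budget=8)
print(f"chosen format: 1 sign + {e} exponent + {m} mantissa bits")
```

Because the search runs after training and needs only each tensor's values, no retraining is involved; this is what makes the approach "post-training", and the per-tensor format choice is what "adaptive" plausibly refers to in the title.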