Shifted and Squeezed 8-bit Floating Point format for Low-Precision Training of Deep Neural Networks.
Léopold Cambier, Anahita Bhiwandiwalla, Ting Gong, Oguz H. Elibol, Mehran Nekuii, Hanlin Tang. Published in: ICLR (2020)
Keyphrases
- floating point
- neural network
- training process
- square root
- training algorithm
- fixed point
- feedforward neural networks
- multi-layer perceptron
- sparse matrices
- training set
- neural network model
- backpropagation
- instruction set
- interval arithmetic
- general purpose
- artificial neural networks
- deep architectures
- recurrent neural networks
- frequency domain
- higher order
- probabilistic model
- fast Fourier transform
- image processing
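To illustrate the "floating point" and "low-precision training" keyphrases above, here is a minimal sketch of simulated 8-bit floating-point quantization with a per-tensor power-of-two shift. The helper `quantize_fp8`, the 1-5-2 bit split, and the clamping strategy are assumptions for illustration only, not the paper's exact S2FP8 algorithm.

```python
import numpy as np

def quantize_fp8(x, exp_bits=5, man_bits=2, shift=0.0):
    """Simulate rounding x to an 8-bit float with `exp_bits` exponent bits
    and `man_bits` explicit mantissa bits. `shift` rescales the tensor by
    2**shift before quantization (a crude stand-in for the 'shift' idea),
    and the inverse scale is applied on the way out.
    Hypothetical helper, not the paper's exact method."""
    x = np.asarray(x, dtype=np.float64) * 2.0 ** shift
    sign = np.sign(x)
    mag = np.abs(x)
    # Decompose magnitude as man * 2**exp with man in [0.5, 1).
    man, exp = np.frexp(mag)
    # Round the mantissa to man_bits fractional bits past the leading bit.
    scale = 2.0 ** (man_bits + 1)
    man_q = np.round(man * scale) / scale
    # Clamp the exponent to a representable range (a crude stand-in for
    # proper overflow/underflow handling).
    bias = 2 ** (exp_bits - 1)
    exp_q = np.clip(exp, -bias + 2, bias)
    y = sign * np.ldexp(man_q, exp_q)
    return y * 2.0 ** (-shift)
```

With only two mantissa bits the relative rounding error can approach 1/8, which is why centering a tensor's distribution (the shift) matters before quantizing.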