Shifted and Squeezed 8-bit Floating Point format for Low-Precision Training of Deep Neural Networks.
Léopold Cambier, Anahita Bhiwandiwalla, Ting Gong, Mehran Nekuii, Oguz H. Elibol, Hanlin Tang
Published in: CoRR (2020)
Keyphrases
- floating point
- neural network
- training process
- square root
- training algorithm
- fixed point
- feedforward neural networks
- multi layer perceptron
- artificial neural networks
- feed forward neural networks
- floating point arithmetic
- back propagation
- deep architectures
- fast fourier transform
- multilayer perceptron
- instruction set
- interval arithmetic
- sparse matrices
- training set
- recurrent neural networks
- data processing
- multi view
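The title describes an 8-bit floating point format in which tensors are shifted and squeezed before quantization for low-precision training. As a rough illustration of that idea only (a minimal sketch, not the authors' published algorithm: the exponent/mantissa split, the choice of target statistics, and every function name below are assumptions), the snippet remaps a tensor's magnitudes affinely in log2 space, rounds the result to a toy 8-bit float grid, and then inverts the remapping.

```python
import numpy as np

# Illustrative sketch only. The 1-5-2 bit split, the target statistics, and
# the helper names are assumptions for demonstration, not the paper's
# reference implementation.

def fp8_quantize(x, exp_bits=5, man_bits=2):
    """Snap |x| to a toy 8-bit float grid (1 sign bit, exp_bits, man_bits).

    The dynamic range is limited by the exponent width; this is a float64
    simulation of low-precision rounding, not a real FP8 datatype.
    """
    bias = 2 ** (exp_bits - 1) - 1
    max_exp = 2 ** exp_bits - 2 - bias          # largest normal exponent
    min_exp = 1 - bias                          # smallest normal exponent
    sign = np.sign(x)
    mag = np.abs(x)
    mag = np.clip(mag, 2.0 ** min_exp, 2.0 ** max_exp * (2 - 2.0 ** -man_bits))
    e = np.floor(np.log2(mag))                  # per-element exponent
    frac = np.round(mag / 2.0 ** e * 2 ** man_bits) / 2 ** man_bits
    return sign * frac * 2.0 ** e


def shift_squeeze_fp8(x, exp_bits=5, man_bits=2, eps=1e-30):
    """Shift-and-squeeze style quantization (sketch).

    Magnitudes are remapped affinely in log2 space
    (log2|y| = alpha * log2|x| + beta) so their spread matches the assumed
    FP8 exponent range, quantized to 8 bits, then mapped back. The scalars
    alpha ("squeeze") and beta ("shift") stay in higher precision.
    """
    mag = np.abs(x) + eps
    log_mag = np.log2(mag)
    mu = log_mag.mean()
    half_spread = max(log_mag.max() - mu, eps)

    bias = 2 ** (exp_bits - 1) - 1
    alpha = (bias - 1) / half_spread            # squeeze: fit spread into range
    beta = -alpha * mu                          # shift: center the distribution

    y = np.sign(x) * 2.0 ** (alpha * log_mag + beta)
    y_q = fp8_quantize(y, exp_bits, man_bits)

    # Invert the transform to recover values on the original scale.
    mag_q = np.maximum(np.abs(y_q), eps)
    x_hat = np.sign(y_q) * 2.0 ** ((np.log2(mag_q) - beta) / alpha)
    return x_hat, alpha, beta


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    w = rng.normal(scale=1e-3, size=1000)       # small-magnitude weights
    w_hat, alpha, beta = shift_squeeze_fp8(w)
    rel_err = np.linalg.norm(w - w_hat) / np.linalg.norm(w)
    print(f"alpha={alpha:.3f} beta={beta:.3f} rel-err={rel_err:.3e}")
```

The per-tensor statistics keep small-magnitude gradients and weights inside the narrow representable range of an 8-bit float, which is the motivation suggested by the keyphrases above; the specific statistics used here are an assumption of this sketch.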