Training Deep Neural Networks with 8-bit Floating Point Numbers
Naigang Wang, Jungwook Choi, Daniel Brand, Chia-Yu Chen, Kailash Gopalakrishnan. Published in: NeurIPS (2018)
Keyphrases
- floating point
- neural network
- training process
- training algorithm
- fixed point
- square root
- feed forward neural networks
- multi layer perceptron
- sparse matrices
- artificial neural networks
- deep architectures
- instruction set
- floating point arithmetic
- interval arithmetic
- fast fourier transform
- neural network model
- back propagation
- training set
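The paper trains networks using an 8-bit floating-point representation (1 sign, 5 exponent, and 2 mantissa bits) in place of FP32. As a rough illustration of what such a format can represent, the sketch below rounds a value to the nearest number expressible with a 5-bit biased exponent and 2-bit mantissa, saturating on overflow; this is a minimal standalone approximation, not the paper's actual training pipeline (which also relies on techniques such as chunk-based FP16 accumulation and stochastic rounding).

```python
import math

def quantize_fp8(x, exp_bits=5, man_bits=2):
    """Round x to the nearest value representable in a small
    sign/exponent/mantissa floating-point format (illustrative only)."""
    if x == 0.0:
        return 0.0
    bias = 2 ** (exp_bits - 1) - 1        # IEEE-style exponent bias (15 for 5 bits)
    e_min = 1 - bias                       # smallest normal exponent (-14)
    e_max = (2 ** exp_bits - 2) - bias     # largest normal exponent (15)
    sign = -1.0 if x < 0 else 1.0
    mag = abs(x)
    e = math.floor(math.log2(mag))
    e = max(min(e, e_max), e_min)          # clamping below e_min yields the subnormal grid
    step = 2.0 ** (e - man_bits)           # spacing of representable values at this scale
    q = round(mag / step) * step           # round to nearest on that grid
    max_val = (2.0 - 2.0 ** -man_bits) * 2.0 ** e_max
    return sign * min(q, max_val)          # saturate instead of overflowing

# With only 2 mantissa bits, values near 1.0 fall on a coarse grid
# (1.0, 1.25, 1.5, 1.75, ...), so e.g. 1.1 rounds down to 1.0.
```

The coarse mantissa makes the rounding error visible even for small inputs, which is why the paper pairs the 8-bit format with careful accumulation and rounding schemes rather than using it naively everywhere.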