Training Deep Neural Networks with 8-bit Floating Point Numbers
Naigang Wang, Jungwook Choi, Daniel Brand, Chia-Yu Chen, Kailash Gopalakrishnan. Published in: CoRR (2018)
Keyphrases
- floating point
- neural network
- training process
- feedforward neural networks
- training algorithm
- multi layer perceptron
- fixed point
- square root
- artificial neural networks
- sparse matrices
- instruction set
- interval arithmetic
- neural network model
- back propagation
- image processing
- recurrent neural networks
- general purpose
- pairwise
- fast fourier transform
- bayesian networks
- deep architectures