A Logarithmic Floating-Point Multiplier for the Efficient Training of Neural Networks.
Zijing Niu, Honglan Jiang, Mohammad Saeed Ansari, Bruce F. Cockburn, Leibo Liu, Jie Han
Published in: ACM Great Lakes Symposium on VLSI (2021)
Keyphrases
- floating point
- neural network
- sparse matrices
- fixed point
- square root
- training algorithm
- training process
- floating point arithmetic
- instruction set
- fast fourier transform
- feedforward neural networks
- artificial neural networks
- back propagation
- higher order
- interval arithmetic
- image processing
- multi layer perceptron
- recurrent neural networks
- image segmentation
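
For context on the general class of technique named in the title, below is a minimal Python sketch of Mitchell-style logarithmic multiplication, where a product is approximated by adding approximate base-2 logarithms (log2(1+f) ≈ f) and converting back. This illustrates the broad idea only; it is not the multiplier design proposed in the paper, and all function names here are illustrative.

```python
import math


def mitchell_log2(x: float) -> float:
    """Approximate log2(x) for x > 0 using Mitchell's approximation log2(1 + f) ~= f."""
    m, e = math.frexp(x)           # x = m * 2**e with 0.5 <= m < 1
    f = 2.0 * m - 1.0              # rewrite as x = 2**(e - 1) * (1 + f), 0 <= f < 1
    return (e - 1) + f             # exponent plus linear approximation of log2(1 + f)


def mitchell_antilog2(y: float) -> float:
    """Approximate 2**y using the inverse approximation 2**f ~= 1 + f."""
    k = math.floor(y)
    f = y - k                      # fractional part, 0 <= f < 1
    return math.ldexp(1.0 + f, k)  # (1 + f) * 2**k


def approx_multiply(a: float, b: float) -> float:
    """Approximate a * b (positive inputs) by adding the approximate logarithms."""
    return mitchell_antilog2(mitchell_log2(a) + mitchell_log2(b))


if __name__ == "__main__":
    for a, b in [(3.7, 2.5), (0.8, 12.0), (1.5, 1.5)]:
        exact, approx = a * b, approx_multiply(a, b)
        print(f"{a} * {b}: exact={exact:.4f} approx={approx:.4f} "
              f"rel.err={(approx - exact) / exact:+.2%}")
```

In hardware, the appeal of this scheme is that the multiplication reduces to exponent handling and fixed-point addition, which is why logarithmic multipliers are studied as low-cost approximate units for neural-network workloads; the paper's specific floating-point design should be consulted for its actual error behavior and architecture.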