In Defense of Pure 16-bit Floating-Point Neural Networks
Juyoung Yun
Byungkon Kang
François Rameau
Zhoulai Fu
Published in: CoRR (2023)
Keyphrases
floating point
neural network
square root
fixed point
artificial neural networks
instruction set
sparse matrices
intrusion detection
back propagation
advanced research projects agency
floating point arithmetic
information systems
np hard
probabilistic model
interval arithmetic