Training binary neural networks without floating point precision.
Federico Fontana
Published in: CoRR (2023)
Keyphrases
floating point
neural network
sparse matrices
training process
fixed point
training algorithm
feedforward neural networks
multi-layer perceptron
square root
backpropagation
instruction set
floating point arithmetic
dynamic programming
higher order
post processing