Logarithm-approximate floating-point multiplier is applicable to power-efficient neural network training.
TaiYu Cheng
Yutaka Masuda
Jun Chen
Jaehoon Yu
Masanori Hashimoto
Published in:
Integration, the VLSI Journal (2020)
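The title concerns logarithm-approximate floating-point multiplication. As an illustrative sketch only (an assumption for context, not the paper's actual hardware design), a Mitchell-style approximation exploits the fact that an IEEE-754 bit pattern is roughly a scaled log2 of its value, so adding two bit patterns and subtracting the exponent bias once approximates a multiply without a mantissa multiplier:

```python
import struct

def f2i(f: float) -> int:
    """Reinterpret a float32 as its raw 32-bit pattern."""
    return struct.unpack('<I', struct.pack('<f', f))[0]

def i2f(i: int) -> float:
    """Reinterpret a 32-bit pattern as a float32."""
    return struct.unpack('<f', struct.pack('<I', i & 0xFFFFFFFF))[0]

def approx_mul(a: float, b: float) -> float:
    """Mitchell-style approximate multiply for positive normal floats.

    Adding the bit patterns adds the exponents and approximately adds
    log2(1 + mantissa) ~ mantissa; one exponent bias (0x3F800000) must
    be subtracted so the bias is not counted twice.
    """
    return i2f(f2i(a) + f2i(b) - 0x3F800000)
```

For powers of two the result is exact (e.g. `approx_mul(2.0, 4.0)` gives 8.0); otherwise Mitchell's log approximation underestimates the product with a bounded relative error of roughly 11% in the worst case, which is the kind of error/power trade-off the paper evaluates for neural network training.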
Keyphrases
floating point
sparse matrices
neural network training
fixed point
artificial intelligence
neural network
feature vectors
expert systems
dynamic programming
instruction set
floating point arithmetic