Logarithm-approximate floating-point multiplier is applicable to power-efficient neural network training.

TaiYu Cheng, Yukata Masuda, Jun Chen, Jaehoon Yu, Masanori Hashimoto
Published in: Integr. (2020)
Keyphrases
  • floating point
  • sparse matrices
  • neural network training
  • fixed point
  • artificial intelligence
  • neural network
  • feature vectors
  • expert systems
  • dynamic programming
  • instruction set
  • floating point arithmetic
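The title refers to replacing floating-point multiplication with additions in the logarithm domain. As a generic illustration of that idea (not necessarily the exact hardware design in the paper), the sketch below uses Mitchell's classic piecewise-linear approximation, log2(1+f) ≈ f, so that a multiply becomes an exponent addition plus a mantissa addition; all function names here are hypothetical.

```python
import math

def approx_log2(x: float) -> float:
    """Mitchell's approximation of log2(x) for x > 0.

    frexp gives x = m * 2**e with m in [0.5, 1); renormalizing the
    mantissa to [1, 2) and applying log2(1+f) ~= f yields a value
    computable with shifts and adds in hardware.
    """
    m, e = math.frexp(x)          # x = m * 2**e, 0.5 <= m < 1
    return (e - 1) + (2.0 * m - 1.0)

def approx_exp2(y: float) -> float:
    """Inverse mapping with the same linear approximation: 2**(k+f) ~= (1+f) * 2**k."""
    k = math.floor(y)
    f = y - k
    return (1.0 + f) * (2.0 ** k)

def approx_mul(a: float, b: float) -> float:
    """Approximate a*b (a, b > 0) via addition in the log domain."""
    return approx_exp2(approx_log2(a) + approx_log2(b))
```

For example, `approx_mul(4.0, 8.0)` is exact (both operands are powers of two), while `approx_mul(3.0, 3.0)` underestimates 9; Mitchell's method always underestimates, with relative error bounded by about 11.1%, which is the kind of tolerance that neural-network training can often absorb.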