LNPU: A 25.3TFLOPS/W Sparse Deep-Neural-Network Learning Processor with Fine-Grained Mixed Precision of FP8-FP16.

Jinsu Lee, Juhyoung Lee, Donghyeon Han, Jinmook Lee, Gwangtae Park, Hoi-Jun Yoo
Published in: ISSCC (2019)
Keyphrases
  • fine grained
  • neural network
  • coarse grained
  • learning process
  • access control
  • tightly coupled
  • learning algorithm
  • artificial neural networks
  • incremental learning
  • massively parallel
  • deep learning