LNPU: A 25.3TFLOPS/W Sparse Deep-Neural-Network Learning Processor with Fine-Grained Mixed Precision of FP8-FP16.
Jinsu Lee, Juhyoung Lee, Donghyeon Han, Jinmook Lee, Gwangtae Park, Hoi-Jun Yoo
Published in: ISSCC (2019)
Keyphrases
fine grained
neural network
coarse grained
learning process
access control
tightly coupled
learning algorithm
artificial neural networks
incremental learning
massively parallel
deep learning