
A 28nm 276.55TFLOPS/W Sparse Deep-Neural-Network Training Processor with Implicit Redundancy Speculation and Batch Normalization Reformulation.

Yang Wang, Yubin Qin, Dazheng Deng, Jingchuan Wei, Tianbao Chen, Xinhan Lin, Leibo Liu, Shaojun Wei, Shouyi Yin
Published in: Symposium on VLSI Circuits (2021)
Keyphrases
  • neural network training
  • neural network
  • training algorithm
  • optimization method
  • genetic algorithm
  • machine learning
  • artificial neural networks