Improving Inference Latency and Energy of Network-on-Chip based Convolutional Neural Networks through Weights Compression

Giuseppe Ascia, Vincenzo Catania, John Jose, Salvatore Monteleone, Maurizio Palesi, Davide Patti
Published in: IPDPS Workshops (2020)
Keyphrases
  • convolutional neural networks
  • network on chip
  • data transfer
  • routing algorithm
  • energy consumption
  • convolutional network
  • energy efficiency
  • energy efficient
  • multi processor
  • low latency
  • network simulator