TensorDash: Exploiting Sparsity to Accelerate Deep Neural Network Training.

Mostafa Mahmoud, Isak Edo, Ali Hadi Zadeh, Omar Mohamed Awad, Gennady Pekhimenko, Jorge Albericio, Andreas Moshovos
Published in: MICRO (2020)
Keyphrases
  • neural network training
  • neural network
  • training algorithm
  • optimization method
  • particle swarm optimization
  • evolutionary algorithm
  • back propagation
  • data sets
  • metadata
  • genetic programming