MiCRO: Near-Zero Cost Gradient Sparsification for Scaling and Accelerating Distributed DNN Training.
Daegun Yoon, Sangyoon Oh. Published in: HiPC (2023)
Keyphrases
- training process
- communication cost
- distributed systems
- high cost
- cooperative
- neural network
- distributed environment
- test set
- fault tolerant
- multi agent
- communication overhead
- least squares
- online learning
- cost sensitive
- distributed data
- supervised learning
- training set
- radial basis function
- computer networks
- reinforcement learning
- training phase
- data sets
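The paper's title refers to gradient sparsification, a technique for cutting the communication cost of distributed DNN training by exchanging only the largest-magnitude gradient entries. As a minimal, generic sketch of the idea (not MiCRO's own near-zero-cost scheme; the function name and error-feedback residual shown here are illustrative assumptions), top-k sparsification looks like:

```python
import numpy as np

def topk_sparsify(grad, k):
    """Keep the k largest-magnitude entries of a gradient; zero the rest.

    Returns the sparse gradient (what a worker would communicate) and the
    residual (the zeroed-out mass, which error-feedback schemes carry over
    into the next iteration). Illustrative sketch, not the paper's method.
    """
    flat = grad.ravel()
    # Indices of the k largest absolute values, found without a full sort.
    idx = np.argpartition(np.abs(flat), -k)[-k:]
    sparse = np.zeros_like(flat)
    sparse[idx] = flat[idx]
    residual = flat - sparse
    return sparse.reshape(grad.shape), residual.reshape(grad.shape)

g = np.array([0.1, -2.0, 0.05, 3.0, -0.2])
s, r = topk_sparsify(g, k=2)
# s keeps only the two largest-magnitude entries; s + r reconstructs g.
```

Sending only the k nonzero values (plus their indices) instead of the dense gradient is what reduces communication overhead; the residual term is how such schemes avoid losing the discarded gradient mass over time.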