A Communication-Efficient Distributed Gradient Clipping Algorithm for Training Deep Neural Networks.
Mingrui Liu, Zhenxun Zhuang, Yunwei Lei, Chunyang Liao
Published in: CoRR (2022)
Keyphrases
- neural network
- training algorithm
- optimization algorithm
- learning algorithm
- simulated annealing
- worst case
- training process
- k-means
- training phase
- optimal solution
- dynamic programming
- single pass
- classification algorithm
- expectation maximization
- computationally efficient
- feedforward neural networks
- computer networks
- neural network training
- distributed systems
- computational cost
- pattern recognition
- objective function
- peer-to-peer
- matching algorithm
- probabilistic model
- search space
- computational complexity
- multi-agent
- gradient method