A Communication-Efficient Distributed Gradient Clipping Algorithm for Training Deep Neural Networks.
Mingrui Liu, Zhenxun Zhuang, Yunwen Lei, Chunyang Liao. Published in: NeurIPS (2022)
Keyphrases
- neural network
- detection algorithm
- single pass
- computationally efficient
- cost function
- computational cost
- high efficiency
- artificial neural networks
- preprocessing
- computational complexity
- pattern recognition
- optimal solution
- objective function
- learning algorithm
- dynamic programming
- fuzzy logic
- optimization algorithm
- classification algorithm
- distributed environment
- neural network training
- gradient information
- training phase
- matching algorithm
- peer to peer
- worst case
- search space
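To make the keyphrases "gradient information", "neural network training", and "distributed environment" concrete, below is a minimal, self-contained sketch of distributed-style training with per-worker gradient clipping and infrequent parameter averaging. This is a generic illustration only, not the algorithm proposed in the paper; the worker count, local step count, and clipping threshold are illustrative assumptions, and the simulated-worker setup stands in for a real communication backend.

```python
# Generic sketch: simulated workers run local SGD steps with clipped
# gradients and only communicate (average parameters) once per round,
# which reduces communication relative to per-step gradient syncing.
# NOT the paper's algorithm; all hyperparameters are assumptions.
import copy
import torch
import torch.nn as nn

NUM_WORKERS = 4    # simulated workers (assumption)
LOCAL_STEPS = 8    # local steps between communication rounds (assumption)
CLIP_NORM = 1.0    # gradient-norm clipping threshold (assumption)
ROUNDS = 5

torch.manual_seed(0)
base_model = nn.Linear(10, 1)
workers = [copy.deepcopy(base_model) for _ in range(NUM_WORKERS)]
opts = [torch.optim.SGD(w.parameters(), lr=0.1) for w in workers]
loss_fn = nn.MSELoss()

def local_batch():
    """Synthetic data standing in for each worker's local data shard."""
    x = torch.randn(32, 10)
    y = x.sum(dim=1, keepdim=True) + 0.1 * torch.randn(32, 1)
    return x, y

for r in range(ROUNDS):
    # Each worker performs several local updates with clipped gradients.
    for w, opt in zip(workers, opts):
        for _ in range(LOCAL_STEPS):
            x, y = local_batch()
            opt.zero_grad()
            loss_fn(w(x), y).backward()
            # Clip the local gradient norm before the parameter update.
            torch.nn.utils.clip_grad_norm_(w.parameters(), CLIP_NORM)
            opt.step()
    # Communication round: average model parameters across workers once,
    # instead of synchronizing gradients at every step.
    with torch.no_grad():
        for params in zip(*(w.parameters() for w in workers)):
            avg = torch.stack([p.data for p in params]).mean(dim=0)
            for p in params:
                p.data.copy_(avg)

print("final worker-0 weights:", workers[0].weight.data)
```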