EmbRace: Accelerating Sparse Communication for Distributed Training of Deep Neural Networks
Shengwei Li, Zhiquan Lai, Dongsheng Li, Yiming Zhang, Xiangyu Ye, Yabo Duan
Published in: ICPP (2022)
Keyphrases
- neural network
- communication overhead
- training process
- communication cost
- training algorithm
- distributed control
- computer networks
- distributed systems
- multi layer perceptron
- spatially distributed
- feedforward neural networks
- cooperative
- distributed computation
- backpropagation algorithm
- feed forward neural networks
- error back propagation
- distributed environment
- sparse data
- open systems
- fully distributed
- back propagation
- deep architectures
- pattern recognition
- self organizing maps
- distributed network
- training phase
- multi party
- group communication
- genetic algorithm
- neural network training
- multi layer
- multi agent
- neural network model
- avoid overfitting
- feed forward
- training set
- artificial neural networks
- activation function
- fuzzy logic
- training examples
- interprocess communication
- neural network structure
- support vector machine
- mobile agents
- fault diagnosis
- multimedia communication
- communication networks
- recurrent neural networks
- communication systems