Maximizing Parallelism in Distributed Training for Huge Neural Networks.
Zhengda Bian, Qifan Xu, Boxiang Wang, Yang You
Published in: CoRR (2021)
Keyphrases
- neural network
- training process
- training algorithm
- distributed systems
- pattern recognition
- feedforward neural networks
- neural network training
- supervised learning
- backpropagation
- multi-agent
- artificial neural networks
- training patterns
- distributed environment
- multi-layer perceptron
- data transfer
- error backpropagation
- fuzzy logic
- neural nets
- genetic algorithm
- backpropagation algorithm
- recurrent networks
- radial basis function network
- training phase
- shared memory
- feed-forward
- self-organizing maps
- test set
- training examples
- parallel computing
- fine-grain
- distributed computing
- associative memory
- cooperative
- neural network structure
- multi-layer