Accelerating Training for Distributed Deep Neural Networks in MapReduce.
Jie Xu, Jingyu Wang, Qi Qi, Haifeng Sun, Jianxin Liao. Published in: ICWS (2018)
Keyphrases
- neural network
- training process
- training algorithm
- feedforward neural networks
- multi-layer perceptron
- distributed computing
- cooperative
- distributed systems
- feed forward neural networks
- multi-agent
- data-intensive
- distributed processing
- backpropagation
- backpropagation algorithm
- feed forward
- neural network training
- training set
- pattern recognition
- training patterns
- recurrent networks
- deep architectures
- fault-tolerant
- fuzzy logic
- training phase
- neural network model
- data partitioning
- artificial neural networks
- communication cost
- distributed data
- recurrent neural networks
- mobile agents
- deep learning
- test set
- training examples
- cloud computing
- supervised learning