Distributed B-SDLM: Accelerating the Training Convergence of Deep Neural Networks Through Parallelism.
Shan Sung Liew, Mohamed Khalil Hani, Rabia Bakhteri
Published in: PRICAI (2016)
Keyphrases
- neural network
- training process
- training algorithm
- multi layer perceptron
- distributed systems
- deep architectures
- neural network training
- feedforward neural networks
- error back propagation
- distributed environment
- cooperative
- training set
- pattern recognition
- multi agent
- feed forward
- neural nets
- fuzzy systems
- backpropagation algorithm
- neural network structure
- neural network model
- computer networks
- training examples
- test set
- convergence rate
- weight update
- peer to peer
- training patterns
- training samples
- distributed data
- distributed computing
- fault diagnosis
- recurrent neural networks
- multi agent systems
- evolutionary algorithm
- parallel processing
- commodity hardware
- fuzzy logic
- online training
- sufficient conditions
- hidden layer
- iterative algorithms
- cloud computing
- computing environments
- massively parallel
- self organizing maps
- mobile agents