Distributed Training of Deep Neural Networks: Theoretical and Practical Limits of Parallel Scalability.
Janis Keuper, Franz-Josef Pfreundt. Published in: MLHPC@SC (2016)
Keyphrases
- neural network
- training process
- training algorithm
- map reduce
- parallel database systems
- distributed processing
- feedforward neural networks
- fault tolerant
- theoretical underpinnings
- master slave
- multi layer perceptron
- neural network training
- distributed systems
- high scalability
- parallel processing
- fault tolerance
- cooperative
- distributed environment
- fully distributed
- backpropagation algorithm
- pc cluster
- scalable distributed
- genetic algorithm
- training examples
- theoretical considerations
- artificial neural networks
- parallel search
- supervised learning
- recurrent networks
- self organizing maps
- response time
- recurrent neural networks
- learning algorithm
- pattern recognition
- error back propagation
- deep architectures
- parallel implementation
- neural network structure
- fuzzy logic
- multi layer
- back propagation
- load balancing
- peer to peer
- multiple independent
- radial basis function
- feed forward
- activation function
- mobile agents
- parallel execution
- neural network model
- radial basis function network