Parallel and distributed training of neural networks via successive convex approximation.
Paolo Di Lorenzo, Simone Scardapane. Published in: MLSP (2016)
Keyphrases
- neural network
- training process
- training algorithm
- feedforward neural networks
- backpropagation algorithm
- distributed processing
- distributed systems
- backpropagation
- feed forward neural networks
- master-slave
- neural network training
- convex functions
- neural network structure
- distributed environment
- convex sets
- multi-layer perceptron
- cooperative
- multi-agent
- parallel database systems
- peer-to-peer
- convex hull
- multi-layer
- artificial neural networks
- parallel processing
- error backpropagation
- neural network model
- fault diagnosis
- training samples
- continuous functions
- sufficient conditions
- cloud computing
- load balancing
- parallel execution
- test set
- MapReduce
- closed form
- feed forward
- training phase
- recurrent neural networks