Asynchronous stochastic optimization for sequence training of deep neural networks.
Georg Heigold, Erik McDermott, Vincent Vanhoucke, Andrew W. Senior, Michiel Bacchiani
Published in: ICASSP (2014)
Keyphrases
- stochastic optimization
- neural network
- training process
- multistage
- training algorithm
- feedforward neural networks
- deep architectures
- neural network model
- back propagation
- training set
- robust optimization
- backpropagation algorithm
- multilayer perceptron
- dynamic programming
- artificial neural networks
- neural network structure
- training dataset
- training data
- decision making
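The title refers to asynchronous stochastic optimization, in which several workers read and update shared model parameters concurrently, without waiting on one another. A minimal lock-free sketch of that idea on a toy quadratic objective is shown below; the problem, learning rate, and worker count are illustrative assumptions, not the paper's actual distributed DNN setup.

```python
import threading
import random

random.seed(0)

# Toy objective: minimize sum_i (w - x_i)^2 over a shared scalar w.
# Assumed setup for illustration only; not the paper's DistBelief-style system.
data = [random.gauss(3.0, 0.1) for _ in range(1000)]
w = [0.0]   # shared parameter, mutated by all workers without a lock
lr = 0.01   # learning rate (arbitrary choice for this sketch)

def worker(samples):
    """Each worker applies SGD updates asynchronously on the shared w."""
    for x in samples:
        grad = 2.0 * (w[0] - x)   # gradient of (w - x)^2
        w[0] -= lr * grad         # asynchronous, possibly stale update

# Four workers, each on an interleaved shard of the data.
threads = [threading.Thread(target=worker, args=(data[i::4],))
           for i in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(w[0])  # converges near the data mean (about 3.0)
```

Despite races between workers, the updates still drive the parameter toward the optimum; tolerating such stale, unsynchronized gradients is the core trade-off of asynchronous SGD.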