Improving training time of Hessian-free optimization for deep neural networks using preconditioning and sampling.
Tara N. Sainath, Lior Horesh, Brian Kingsbury, Aleksandr Y. Aravkin, Bhuvana Ramabhadran
Published in: CoRR (2013)
Keyphrases
- neural network
- training algorithm
- training process
- backpropagation algorithm
- feedforward neural networks
- conjugate gradient
- optimization algorithm
- training set
- multi layer perceptron
- error backpropagation
- deep architectures
- back propagation
- optimization problems
- iterative methods
- artificial neural networks
- feed forward neural networks
- constrained optimization
- global optimization
- hidden layer
- fuzzy logic
- pattern recognition
- edge preserving
- neural network training
- recurrent neural networks
- stochastic gradient descent
- highly nonlinear
- genetic algorithm
- optimal solution
- feature space
- training phase
- training samples
- neural nets
- optimization method