Accelerating deep neural network training with inconsistent stochastic gradient descent.
Linnan Wang, Yi Yang, Renqiang Min, Srimat T. Chakradhar
Published in: Neural Networks (2017)
Keyphrases
- stochastic gradient descent
- neural network training
- neural network
- least squares
- training algorithm
- loss function
- matrix factorization
- random forests
- step size
- optimization method
- support vector machine
- importance sampling
- online algorithms
- weight vector
- regularization parameter
- multiple kernel learning
- back propagation
- learning rate
- hidden layer
- optimal solution
- artificial neural networks
- search space
- collaborative filtering
- cost function
- particle swarm optimization (PSO)
- genetic programming