Automatic and Simultaneous Adjustment of Learning Rate and Momentum for Stochastic Gradient-based Optimization Methods.
Tomer Lancewicki, Selçuk Köprü. Published in: ICASSP (2020)
Keyphrases
- learning rate
- optimization methods
- stochastic methods
- learning algorithm
- optimization method
- simulated annealing
- stage stochastic programs
- convergence rate
- optimization problems
- convergence speed
- adaptive learning rate
- optimization approaches
- unconstrained optimization
- weight vector
- rapid convergence
- particle swarm
- global convergence
- training algorithm
- multilayer neural networks
- efficient optimization
- direct optimization
- genetic algorithm
- evolutionary algorithm
- gradient method
- metaheuristic
- bayesian network models
- evolution strategy
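The keyphrases above center on adjusting the learning rate and momentum of stochastic gradient methods. As context, the sketch below shows the standard SGD-with-momentum update that such methods build on; it is illustrative only and is not the paper's algorithm — the paper proposes adjusting the `lr` and `momentum` hyperparameters automatically, whereas here they are fixed, and the quadratic objective is an assumption chosen for clarity.

```python
# Illustrative only: plain SGD with momentum on the 1-D quadratic
# f(w) = (w - 3)^2. The paper adjusts lr and momentum automatically;
# here both are fixed hyperparameters.

def sgd_momentum(grad, w0, lr=0.1, momentum=0.9, steps=200):
    """Return the iterate after `steps` momentum-SGD updates."""
    w, v = w0, 0.0
    for _ in range(steps):
        v = momentum * v - lr * grad(w)  # velocity accumulates past gradients
        w = w + v                        # parameter update
    return w

grad = lambda w: 2.0 * (w - 3.0)  # gradient of (w - 3)^2
w_final = sgd_momentum(grad, w0=0.0)
```

With these settings the iterate converges toward the minimizer at w = 3; how fast it does so depends jointly on `lr` and `momentum`, which is why tuning them together matters.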