Theoretical Interpretation of Learned Step Size in Deep-Unfolded Gradient Descent.
Satoshi Takabe, Tadashi Wadayama. Published in: CoRR (2020)
Keyphrases
- step size
- cost function
- convergence rate
- stochastic gradient descent
- convergence speed
- evolutionary programming
- faster convergence
- variable step size
- steepest descent method
- objective function
- conjugate gradient
- gradient method
- adaptive filter
- Hessian matrix
- non-stationary
- optimization problems
- simulated annealing
- least mean square
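The paper's subject, deep-unfolded gradient descent, unrolls a fixed number of gradient-descent iterations and treats the per-iteration step sizes as trainable parameters. The following is a minimal NumPy sketch of the forward pass only, not the paper's implementation: the problem sizes, the least-squares objective, and the hand-set step sizes (here simply 1/L rather than learned values) are illustrative assumptions.

```python
import numpy as np

# Minimal sketch of deep-unfolded gradient descent for the least-squares
# problem min_x 0.5 * ||A x - b||^2. In the deep-unfolded setting, each of
# the T iterations has its own step size gamma_t, which would be trained
# (e.g. by backpropagation); here the gammas are hand-set to 1/L so the
# forward unrolling can be run stand-alone.

rng = np.random.default_rng(0)
n, m = 8, 16
A = rng.standard_normal((m, n))     # overdetermined system
x_true = rng.standard_normal(n)
b = A @ x_true

T = 500                             # number of unfolded iterations
L = np.linalg.eigvalsh(A.T @ A).max()   # Lipschitz constant of the gradient
gammas = np.full(T, 1.0 / L)        # placeholder for "learned" step sizes


def unfolded_gd(A, b, gammas):
    """Run len(gammas) gradient-descent steps, one step size per layer."""
    x = np.zeros(A.shape[1])
    for gamma in gammas:
        grad = A.T @ (A @ x - b)    # gradient of 0.5 * ||A x - b||^2
        x = x - gamma * grad
    return x


x_hat = unfolded_gd(A, b, gammas)
err = np.linalg.norm(x_hat - x_true)
print(err)
```

Replacing the constant `gammas` with a trained, iteration-dependent schedule is what distinguishes the deep-unfolded variant from the classical steepest descent method listed among the keyphrases above.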