Convergence Rates of Non-Convex Stochastic Gradient Descent Under a Generic Łojasiewicz Condition and Local Smoothness
Kevin Scaman
Cédric Malherbe
Ludovic Dos Santos
Published in:
ICML (2022)
Keyphrases
convergence rate
stochastic gradient descent
step size
convergence speed
number of iterations required
learning rate
loss function
convex optimization
least squares
cost function
matrix factorization
convex hull
objective function
missing data
prior information
pairwise
random forests
learning algorithm