Dimension-free convergence rates for gradient Langevin dynamics in RKHS.
Boris Muzellec, Kanji Sato, Mathurin Massias, Taiji Suzuki
Published in: COLT (2022)
Keyphrases
- convergence rate
- gradient method
- Gaussian kernels
- convergence speed
- reproducing kernel Hilbert space
- step size
- learning rate
- global convergence
- kernel methods
- numerical stability
- loss function
- conjugate gradient
- random projections
- stopping criterion
- primal-dual
- dynamical systems
- machine learning
- number of iterations required
- kernel function