On the Hyperparameters in Stochastic Gradient Descent with Momentum.
Bin Shi
Published in: CoRR (2021)
Keyphrases
- stochastic gradient descent
- hyperparameters
- regularization parameter
- model selection
- cross validation
- support vector
- closed form
- bayesian inference
- random sampling
- bayesian framework
- loss function
- em algorithm
- gaussian process
- learning rate
- incremental learning
- maximum likelihood
- prior information
- noise level
- sample size
- least squares
- posterior distribution
- maximum a posteriori
- incomplete data
- missing values
- support vector machine
- parameter settings
- matrix factorization
- parameter estimation
- expectation maximization
- random forests
- generative model
- decision trees
- logistic regression
- noisy images
- probabilistic model
- image restoration
- parameter space
- active learning
- learning algorithm
- similarity measure
- multiple kernel learning
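
To make the topic of the indexed paper concrete, below is a minimal sketch of stochastic gradient descent with (heavy-ball) momentum, showing the two hyperparameters the title refers to: the learning rate and the momentum coefficient. The quadratic loss, the noise model, and all default values here are illustrative assumptions, not details taken from the paper itself.

```python
import numpy as np


def sgd_momentum(grad, x0, learning_rate=0.01, momentum=0.9, n_steps=100):
    """Heavy-ball SGD update: v <- momentum * v - learning_rate * grad(x); x <- x + v."""
    x = np.asarray(x0, dtype=float)
    v = np.zeros_like(x)
    for _ in range(n_steps):
        v = momentum * v - learning_rate * grad(x)
        x = x + v
    return x


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Illustrative stochastic gradient of the loss f(x) = 0.5 * ||x||^2,
    # with additive Gaussian noise standing in for minibatch sampling.
    noisy_grad = lambda x: x + 0.1 * rng.standard_normal(x.shape)
    x_final = sgd_momentum(noisy_grad, x0=np.ones(5), learning_rate=0.1, momentum=0.9)
    print("final iterate:", x_final)
```

How the iterates behave depends jointly on the learning rate and the momentum coefficient; the paper studies this interplay, whereas the sketch above only fixes one arbitrary setting of each.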