Hyper-parameter optimization for support vector machines using stochastic gradient descent and dual coordinate descent.
Wei Jiang, Sauleh Siddiqui. Published in: EURO J. Comput. Optim. (2020)
Keyphrases
- stochastic gradient descent
- hyperparameters
- support vector
- regularization parameter
- loss function
- cross validation
- support vector machine
- model selection
- step size
- matrix factorization
- least squares
- logistic regression
- random sampling
- bayesian inference
- bayesian framework
- closed form
- random forests
- kernel function
- weight vector
- maximum likelihood
- em algorithm
- incremental learning
- linear svm
- binary classification
- prior information
- kernel methods
- incomplete data
- multiple kernel learning
- noise level
- parameter settings
- hyperplane
- sample size
- generalization ability
- maximum a posteriori
- support vectors
- parameter space
- importance sampling
- online algorithms
- missing data