Regularization in neural network optimization via trimmed stochastic gradient descent with noisy label.
Kensuke Nakamura, Byung-Woo Hong. Published in: CoRR (2020)
Keyphrases
- stochastic gradient descent
- neural network
- stochastic gradient
- least squares
- matrix factorization
- loss function
- step size
- random forests
- regularization parameter
- early stopping
- multiple kernel learning
- online algorithms
- collaborative filtering
- support vector machine
- weight vector
- missing data
- importance sampling
- convergence rate
- convergence speed
- support vector
- image processing
- genetic algorithm
- machine learning
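The title refers to trimmed stochastic gradient descent for training with noisy labels. Below is a minimal, hedged sketch of what a trimmed SGD step can look like, assuming "trimming" means discarding the fraction of mini-batch samples with the largest per-sample loss before averaging gradients; this is a generic robustness heuristic for illustration, not the paper's exact algorithm, and the function `trimmed_sgd_step`, the least-squares model, and the `trim_ratio` parameter are assumptions introduced here.

```python
# Illustrative sketch of a trimmed SGD step on a least-squares model.
# Assumption: samples with the largest squared error in each mini-batch
# (suspected label noise) are dropped before computing the gradient.
import numpy as np

def trimmed_sgd_step(w, X, y, lr=0.1, trim_ratio=0.2):
    """One SGD step that ignores the trim_ratio fraction of samples
    with the largest squared error."""
    residuals = X @ w - y                      # per-sample residuals
    losses = residuals ** 2                    # per-sample squared losses
    n_keep = max(1, int(len(y) * (1.0 - trim_ratio)))
    keep = np.argsort(losses)[:n_keep]         # indices of the smallest losses
    # Gradient of the mean squared error over the kept samples only.
    grad = 2.0 * X[keep].T @ residuals[keep] / n_keep
    return w - lr * grad

# Toy usage: linear data with a handful of corrupted labels.
rng = np.random.default_rng(0)
X = rng.normal(size=(64, 3))
w_true = np.array([1.0, -2.0, 0.5])
y = X @ w_true + 0.01 * rng.normal(size=64)
y[:6] += 5.0                                   # simulate label noise
w = np.zeros(3)
for _ in range(200):
    w = trimmed_sgd_step(w, X, y)
print(w)                                       # approaches w_true despite the noisy labels
```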