Privacy Loss of Noisy Stochastic Gradient Descent Might Converge Even for Non-Convex Losses
Shahab Asoodeh, Mario Díaz. Published in: CoRR (2023)
Keyphrases
- stochastic gradient descent
- matrix factorization
- least squares
- step size
- loss function
- random forests
- support vector machine
- convex hull
- convex optimization
- regularization parameter
- multiple kernel learning
- weight vector
- online algorithms
- missing data
- pairwise
- markov random field
- cost function
- machine learning
- high resolution
- recommender systems
- incomplete data
- importance sampling
- decision trees
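The title refers to noisy stochastic gradient descent, the DP-SGD-style iteration in which each gradient is clipped and perturbed with Gaussian noise before the update. As a minimal sketch of that iteration (all names, defaults, and the quadratic sanity-check loss below are illustrative assumptions, not taken from the paper):

```python
import numpy as np

def noisy_sgd(grad_fn, w0, data, step_size=0.1, clip=1.0, sigma=1.0,
              n_steps=100, rng=None):
    """Illustrative noisy SGD: at each step, sample one example, clip its
    gradient to norm <= clip, add Gaussian noise of scale sigma * clip,
    and take a gradient step. A sketch, not the paper's exact algorithm."""
    rng = np.random.default_rng(rng)
    w = np.array(w0, dtype=float)
    for _ in range(n_steps):
        x = data[rng.integers(len(data))]                 # sample one example
        g = grad_fn(w, x)                                 # per-example gradient
        g = g / max(1.0, np.linalg.norm(g) / clip)        # clip gradient norm
        g = g + sigma * clip * rng.normal(size=np.shape(g))  # Gaussian noise
        w = w - step_size * g
    return w

# Sanity check on a convex quadratic loss l(w; x) = 0.5 * (w - x)^2,
# whose per-example gradient is w - x; the iterate should settle near
# the data mean (about 1.0 here), up to noise.
data = np.array([0.9, 1.1, 1.0])
w_hat = noisy_sgd(lambda w, x: w - x, w0=5.0, data=data,
                  step_size=0.05, sigma=0.1, n_steps=2000, rng=0)
```

The privacy question the paper studies is how the differential-privacy loss of this iteration behaves as `n_steps` grows, in particular without assuming the loss is convex.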