Escaping Saddle Points with Bias-Variance Reduced Local Perturbed SGD for Communication Efficient Nonconvex Distributed Learning
Tomoya Murata
Taiji Suzuki
Published in: NeurIPS (2022)
Keyphrases
distributed learning
saddle points
bias variance
trade off
convex optimization
machine learning
image processing
learning algorithm
image segmentation
training data
objective function
critical points