Improved Convergence Rate for a Distributed Two-Time-Scale Gradient Method under Random Quantization
Marcos M. Vasconcelos
Thinh T. Doan
Urbashi Mitra
Published in:
CoRR (2021)
Keyphrases
gradient method
convergence rate
step size
convergence speed
learning rate
faster convergence rate
neural network
multiscale
number of iterations required
optimization methods
information retrieval
metaheuristic
non-negative matrix factorization
wavelet neural network
recursive least squares