A novel bound on the convergence rate of ADMM for distributed optimization.
Nicola Bastianello, Luca Schenato, Ruggero Carli. Published in: Automatica (2022)
Keyphrases
- convergence rate
- global convergence
- faster convergence rate
- convergence speed
- step size
- line search
- stopping criterion
- learning rate
- number of iterations required
- gradient method
- alternating direction method of multipliers
- faster convergence
- primal dual
- optimization algorithm
- convex optimization
- constrained optimization
- numerical stability
- optimization problems
- lower bound
- augmented lagrangian method
- variable step size
- optimization methods
- optimization method
- non stationary
- worst case
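To make the central keyphrase concrete: the alternating direction method of multipliers (ADMM) splits a distributed optimization problem into local primal updates, a consensus update, and a dual update. The sketch below is a minimal textbook consensus-ADMM iteration on a toy quadratic problem (the data `a`, the penalty `rho`, and the iteration count are illustrative assumptions, not taken from the paper); it is not the algorithm or bound analyzed by Bastianello, Schenato, and Carli.

```python
import numpy as np

# Toy problem (hypothetical): min_x sum_i 0.5*(x - a_i)^2,
# whose optimum is the mean of the a_i. Each "agent" i holds a_i.
a = np.array([1.0, 3.0, 8.0])  # local data for 3 agents (made up)
rho = 1.0                      # ADMM penalty / step-size parameter
n = len(a)

x = np.zeros(n)  # local primal variables x_i
u = np.zeros(n)  # scaled dual variables u_i
z = 0.0          # global consensus variable

for _ in range(100):
    # Local x-update: argmin 0.5*(x - a_i)^2 + (rho/2)*(x - z + u_i)^2,
    # which has the closed form below for this quadratic problem.
    x = (a + rho * (z - u)) / (1.0 + rho)
    # Consensus z-update: average of x_i + u_i across agents.
    z = np.mean(x + u)
    # Dual update: accumulate the consensus residual.
    u = u + x - z

print(round(z, 6))  # → 4.0, the mean of a
```

For this strongly convex toy instance the consensus error contracts geometrically (here it halves each iteration), which is the kind of linear convergence-rate behavior that bounds such as the paper's aim to quantify.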