Abstract

This paper studies distributed optimization problems where a network of agents seeks to minimize the sum of their private cost functions. We propose new algorithms based on the distributed subgradient method and the finite-time consensus protocol introduced by Sundaram and Hadjicostis (2007). In our first algorithm, the local optimization variables are updated cyclically through a subgradient step, while the consensus variables follow a standard consensus protocol that is periodically interrupted by a predictive consensus estimate reset operation. For convex cost functions with bounded subgradients, this algorithm is guaranteed to converge to a neighborhood of the optimal value when a constant step size is used, or to the optimal value when a diminishing step size is employed. For differentiable cost functions whose sum is convex and has a Lipschitz continuous gradient, convergence to the optimal value can be ensured with a constant step size, even if some of the individual cost functions are nonconvex. In addition, exponential convergence to the optimal solution is achieved when the global cost function is further assumed to be strongly convex. In these cases, the local optimization variables reach consensus in finite time and then behave as they would under the centralized subgradient method applied to the global problem, except on a slower time scale. The second algorithm is specialized to quadratic cost functions and converges in finite time to the optimal solution. Simulation examples are given to illustrate the algorithms.
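For orientation, the following is a minimal sketch of the classical distributed subgradient iteration that the proposed algorithms build on: each agent mixes its neighbors' iterates through a consensus (averaging) step and then takes a step along a local subgradient. It does not reproduce the finite-time consensus protocol or the predictive reset operation of the paper; the weight matrix W, the quadratic local costs, and the step-size schedule are illustrative assumptions only.

```python
# Sketch of a distributed subgradient iteration (Nedic-Ozdaglar style),
# NOT the authors' finite-time-consensus algorithm.
import numpy as np

n = 4                                   # number of agents
W = np.array([[0.50, 0.50, 0.00, 0.00],  # assumed doubly stochastic mixing
              [0.50, 0.25, 0.25, 0.00],  # matrix on a line/ring-like topology
              [0.00, 0.25, 0.50, 0.25],
              [0.00, 0.00, 0.25, 0.75]])

a = np.array([1.0, 2.0, 3.0, 4.0])      # local costs f_i(x) = 0.5 * (x - a_i)^2

def subgrad(i, x):
    """(Sub)gradient of the illustrative local cost f_i at x."""
    return x - a[i]

x = np.zeros(n)                         # local optimization variables
for k in range(200):
    alpha = 1.0 / (k + 1)               # diminishing step size
    mixed = W @ x                       # consensus (averaging) step
    x = mixed - alpha * np.array([subgrad(i, mixed[i]) for i in range(n)])

# All local variables approach the minimizer of sum_i f_i, i.e. mean(a) = 2.5.
print(x)
```

Under a constant step size this basic scheme only reaches a neighborhood of the optimum, which is the behavior the paper's first algorithm refines by injecting finite-time consensus estimates.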
