Abstract

This paper addresses a class of constrained optimization problems over networks in which the local cost functions and constraints can be nonconvex. We propose an asynchronous distributed optimization algorithm, based on the centralized Method of Multipliers, in which each node wakes up in an uncoordinated fashion and performs either a descent step on a local Augmented Lagrangian or an ascent step on its local multiplier vector. These two phases are regulated by a distributed logic-AND, which lets nodes detect when the decrease of the (whole) Augmented Lagrangian has become sufficiently small. We show that this distributed algorithm is equivalent to a block coordinate descent algorithm that minimizes the Augmented Lagrangian and then updates the whole multiplier vector. The proposed algorithm therefore inherits the convergence properties of the Method of Multipliers.
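To make the structure concrete, the following is a minimal, purely illustrative sketch of the centralized scheme the abstract describes: block coordinate descent on an Augmented Lagrangian until its decrease is sufficiently small (the role played by the distributed logic-AND in the asynchronous algorithm), followed by an ascent step on the multiplier vector. The cost f, constraint g, penalty parameter, step size, and tolerances below are assumptions made for illustration and are not taken from the paper.

    import numpy as np

    # Illustrative sketch only: Method of Multipliers with block coordinate
    # descent on the Augmented Lagrangian, i.e., the centralized counterpart
    # the abstract says the distributed algorithm is equivalent to.
    rho = 10.0       # penalty parameter (assumed)
    alpha = 0.01     # gradient step size (assumed)
    n_blocks = 3     # one variable block per "node"
    block = 2        # variables per block
    n = n_blocks * block

    def f(x):                       # illustrative nonconvex cost
        return np.sum(x**4 - x**2)

    def grad_f(x):
        return 4 * x**3 - 2 * x

    def g(x):                       # illustrative equality constraint g(x) = 0
        return np.array([np.sum(x) - 1.0])

    def grad_g(x):                  # Jacobian of g, shape (1, n)
        return np.ones((1, n))

    def aug_lag(x, lam):            # Augmented Lagrangian L_rho(x, lambda)
        gx = g(x)
        return f(x) + lam @ gx + 0.5 * rho * gx @ gx

    def grad_aug_lag(x, lam):
        gx = g(x)
        return grad_f(x) + grad_g(x).T @ (lam + rho * gx)

    x = np.zeros(n)
    lam = np.zeros(1)

    for outer in range(200):
        # Descent phase: sweep over blocks until the decrease of the *whole*
        # Augmented Lagrangian is sufficiently small.
        for inner in range(1000):
            L_before = aug_lag(x, lam)
            for b in range(n_blocks):          # block coordinate descent
                idx = slice(b * block, (b + 1) * block)
                x[idx] -= alpha * grad_aug_lag(x, lam)[idx]
            if L_before - aug_lag(x, lam) < 1e-9:
                break
        # Ascent phase: update the whole multiplier vector.
        lam += rho * g(x)
        if np.linalg.norm(g(x)) < 1e-6:        # stop once nearly feasible
            break

    print("x =", x, " g(x) =", g(x), " lambda =", lam)

In the distributed version described in the abstract, each block update would be carried out asynchronously by the node owning that block, and the inner stopping test would be replaced by the distributed logic-AND.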
