Abstract

In this work, we study the task of distributed optimization over a network of learners in which each learner possesses a convex cost function, a set of affine equality constraints, and a set of convex inequality constraints. We propose a fully distributed adaptive diffusion algorithm based on penalty methods that allows the network to cooperatively optimize the global cost function, defined as the sum of the individual costs over the network, subject to all constraints. We show that when small constant step-sizes are employed, the expected distance between the optimal solution vector and the estimate obtained at each node in the network can be made arbitrarily small. Two distinguishing features of the proposed solution relative to related approaches are that it does not require projections and that it can adapt to and track drifts in the location of the minimizer caused by changes in the constraints or in the aggregate cost itself. The proposed strategy is also able to cope with changing network topologies, is robust to network disruptions, and does not require global information or rely on central processors.
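
As a rough illustration of the kind of update the abstract describes, the sketch below implements a penalty-based adapt-then-combine (ATC) diffusion recursion on a toy problem: each node takes a gradient step on its own cost augmented with smooth penalty terms for the constraints, then averages the intermediate iterates of its neighbors. The quadratic local costs, the specific penalty functions, the ring topology, and the step-size and penalty values (`mu`, `eta`) are illustrative assumptions, not the paper's exact formulation.

```python
# Hedged sketch of a penalty-based ATC diffusion strategy.
# All problem data and parameter values below are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

N, d = 5, 2                      # number of nodes, dimension of w (assumed)
mu, eta = 0.01, 10.0             # step-size and penalty weight (assumed)

# Node-local quadratic costs J_k(w) = ||A_k w - b_k||^2 (illustrative)
A = [rng.standard_normal((3, d)) for _ in range(N)]
b = [rng.standard_normal(3) for _ in range(N)]

# Shared constraints (assumed): equality c^T w = 1, inequality w[0] <= 2
c = np.ones(d)

def grad_penalized(k, w):
    """Gradient of J_k(w) plus smooth quadratic penalties for the constraints."""
    g = 2 * A[k].T @ (A[k] @ w - b[k])            # local cost gradient
    g += eta * 2 * (c @ w - 1.0) * c              # equality penalty gradient
    g[0] += eta * 2 * max(w[0] - 2.0, 0.0)        # one-sided inequality penalty
    return g

# Doubly stochastic combination weights over a ring topology (illustrative)
Acomb = np.zeros((N, N))
for k in range(N):
    Acomb[k, k] = 0.5
    Acomb[k, (k + 1) % N] = 0.25
    Acomb[k, (k - 1) % N] = 0.25

w = [np.zeros(d) for _ in range(N)]
for _ in range(2000):
    # Adapt: each node steps along its own penalized gradient (local data only)
    psi = [w[k] - mu * grad_penalized(k, w[k]) for k in range(N)]
    # Combine: each node averages its neighbors' intermediate iterates
    w = [sum(Acomb[l, k] * psi[l] for l in range(N)) for k in range(N)]

print(np.round(w[0], 3))  # all nodes end up close to the penalized optimum
```

Note that no projection onto the constraint set is ever computed, and each node touches only its own data and its neighbors' intermediate iterates, which is what makes this type of strategy fully distributed.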
