Abstract

The multi-agent distributed consensus optimization problem arises in many engineering applications. Recently, the alternating direction method of multipliers (ADMM) has been applied to distributed consensus optimization; the resulting algorithm, referred to as consensus ADMM (C-ADMM), can converge much faster than conventional consensus subgradient methods. However, C-ADMM can be computationally expensive when the cost function to optimize has a complicated structure or when the problem dimension is large. In this paper, we propose an inexact C-ADMM (IC-ADMM) in which each agent performs only one proximal gradient (PG) update at each iteration. The PGs are often easy to obtain, especially for structured sparse optimization problems. Convergence conditions for IC-ADMM are analyzed. Numerical results on a sparse logistic regression problem show that IC-ADMM, though it converges more slowly than the original C-ADMM, has considerably lower computational complexity.
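To illustrate why PG updates are often cheap for structured sparse problems, the sketch below shows a single proximal gradient step for a generic composite objective f(x) + λ‖x‖₁, where the proximal operator reduces to closed-form soft-thresholding. This is a minimal illustration, not the paper's IC-ADMM algorithm; the function names and the choice of an ℓ1 regularizer (as in sparse logistic regression) are assumptions for the example.

```python
import numpy as np

def soft_threshold(v, tau):
    # Proximal operator of tau * ||x||_1: element-wise soft-thresholding.
    # Computed in closed form, so its cost is linear in the dimension.
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def prox_grad_step(x, grad_f, step, lam):
    # One proximal gradient (PG) update for min_x f(x) + lam * ||x||_1:
    # a gradient step on the smooth part f, followed by the l1 prox.
    return soft_threshold(x - step * grad_f, step * lam)

# Example: one PG step from x with a given gradient of the smooth part.
x = np.array([1.0, 0.0])
grad_f = np.array([0.5, -0.5])
x_next = prox_grad_step(x, grad_f, step=1.0, lam=0.2)
```

Because the prox has a closed form, each agent's per-iteration work is a single gradient evaluation plus an element-wise thresholding, in contrast to solving an inner optimization subproblem exactly as in the original C-ADMM.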
