Abstract

The augmented Lagrangian method (ALM) is a well-established algorithm for solving convex optimization problems with linear constraints. Recently, He et al. [On full Jacobian decomposition of the augmented Lagrangian method for separable convex programming, SIAM J. Optim. 25(4) (2015), pp. 2274–2312] demonstrated that a straightforward Jacobian decomposition of ALM is not necessarily convergent when the objective function is a sum of functions without coupled variables. Subsequently, Wang et al. [A note on augmented Lagrangian-based parallel splitting method, Optim. Lett. 9 (2015), pp. 1199–1212] proved global convergence of the augmented Lagrangian-based parallel splitting method under the assumption that all objective functions are strongly convex. In this paper, we extend these results and derive the worst-case O(1/t) convergence rate of this method in both the ergodic and non-ergodic senses, where t denotes the number of iterations. Furthermore, we show that the convergence rate can be improved from O(1/t) to o(1/t), and finally, we also demonstrate that this method achieves global linear convergence when the involved functions satisfy some additional conditions.
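For context, the following is a sketch of the full Jacobian (parallel) decomposition of ALM that the abstract refers to, as it is commonly stated in this literature. The separable model, the penalty parameter β, and the multiplier λ are notational assumptions here and are not spelled out in the abstract itself.

```latex
% Assumed separable model (notation is an assumption, not taken from the abstract):
%   min  \sum_{i=1}^{m} \theta_i(x_i)   subject to   \sum_{i=1}^{m} A_i x_i = b.
% Full Jacobian (parallel) decomposition of ALM with penalty \beta > 0 and multiplier \lambda:
\begin{align*}
  x_i^{k+1} &\in \arg\min_{x_i}\Big\{ \theta_i(x_i) - (\lambda^k)^{\top} A_i x_i
      + \tfrac{\beta}{2}\,\big\| A_i x_i + \textstyle\sum_{j \neq i} A_j x_j^{k} - b \big\|^2 \Big\},
      \quad i = 1,\dots,m \ \text{(updated in parallel)},\\[2pt]
  \lambda^{k+1} &= \lambda^{k} - \beta\Big( \textstyle\sum_{i=1}^{m} A_i x_i^{k+1} - b \Big).
\end{align*}
```

In this scheme all block updates use the previous iterate x^k rather than the freshly updated blocks, which is what distinguishes the parallel (Jacobian) splitting from its sequential (Gauss–Seidel) counterpart and is the source of the convergence difficulties discussed above.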
