Abstract

The augmented Lagrangian method (ALM) is a well-regarded algorithm for solving convex optimization problems with linear constraints. Recently, He et al. [On full Jacobian decomposition of the augmented Lagrangian method for separable convex programming, SIAM J. Optim. 25(4) (2015), pp. 2274–2312] demonstrated that a straightforward Jacobian decomposition of ALM is not necessarily convergent when the objective function is the sum of functions without coupled variables. Subsequently, Wang et al. [A note on augmented Lagrangian-based parallel splitting method, Optim. Lett. 9 (2015), pp. 1199–1212] proved the global convergence of the augmented Lagrangian-based parallel splitting method under the assumption that all objective functions are strongly convex. In this paper, we extend these results and derive the worst-case O(1/t) convergence rate of this method in both the ergodic and non-ergodic senses, where t denotes the number of iterations. Furthermore, we show that the convergence rate can be improved from O(1/t) to o(1/t), and finally, we demonstrate that this method achieves global linear convergence when the involved functions satisfy some additional conditions.
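
For context, the method discussed above applies to separable problems of the form min f_1(x_1) + ... + f_m(x_m) subject to A_1 x_1 + ... + A_m x_m = b. The following is a sketch of one iteration of the full Jacobian (parallel splitting) decomposition of ALM; the notation (penalty parameter beta, multiplier lambda) is assumed here for illustration and may differ from the paper's own conventions.

% Sketch of one iteration of the augmented-Lagrangian-based parallel
% (full Jacobian) splitting scheme; notation (beta, lambda, A_i, b) is
% assumed for illustration and is not taken from the abstract.
\begin{align*}
  & x_i^{k+1} \in \operatorname*{arg\,min}_{x_i}\;
      f_i(x_i)
      + \frac{\beta}{2}\Bigl\| A_i x_i + \sum_{j \neq i} A_j x_j^{k} - b
        - \tfrac{1}{\beta}\lambda^{k} \Bigr\|^2,
      \qquad i = 1,\dots,m \ \text{(in parallel)}, \\
  & \lambda^{k+1} = \lambda^{k} - \beta\Bigl( \sum_{i=1}^{m} A_i x_i^{k+1} - b \Bigr).
\end{align*}

Here each block x_i is updated simultaneously using only the previous iterate of the other blocks, which is what makes the scheme parallel; the multiplier update then uses the new blocks.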

