Abstract

The augmented Lagrangian method (ALM) is a classic approach to canonical convex programming problems with linear constraints, and it finds many applications in various areas of scientific computing. A major advantage of the ALM is that the step for updating the dual variable can be further relaxed with a step size in (0,2), and this relaxation readily yields numerical acceleration of the ALM. When a separable convex programming problem is considered and a corresponding splitting version of the classic ALM is applied, convergence may no longer be guaranteed, and it thus seems impossible to carry a step size in (0,2) over to the relaxation step for updating the dual variable. We show that for a parallel splitting version of the ALM, a step size in (0,2) can be maintained for further relaxing both the primal and dual variables, provided the relaxation step is simply corrected by a rank-two matrix. Hence, a rank-two relaxed parallel splitting version of the ALM with a step size in (0,2) is proposed for separable convex programming problems. We validate on several applications that the new algorithm can significantly outperform existing algorithms of the same kind numerically.
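To make the relaxed dual update concrete, the following minimal sketch applies the classic ALM to a small linearly constrained quadratic program, with the dual step scaled by a step size gamma in (0,2). This is only an illustration of the classic relaxed ALM described in the first sentences of the abstract, not the paper's rank-two corrected parallel splitting scheme; the function name, data, and parameter choices (beta, gamma, iters) are illustrative assumptions.

```python
import numpy as np

# Classic ALM for  min 0.5*||x - c||^2  s.t.  A x = b,  with the dual update
# relaxed by a step size gamma in (0, 2). Illustrative sketch only.
def relaxed_alm(A, b, c, beta=1.0, gamma=1.8, iters=200):
    m, n = A.shape
    lam = np.zeros(m)                       # dual variable (Lagrange multiplier)
    H = np.eye(n) + beta * A.T @ A          # Hessian of the augmented Lagrangian in x
    x = np.zeros(n)
    for _ in range(iters):
        # Primal step: minimize the augmented Lagrangian in x (closed form here).
        x = np.linalg.solve(H, c - A.T @ lam + beta * A.T @ b)
        # Relaxed dual step: gamma in (0, 2) instead of the unit step gamma = 1.
        lam = lam + gamma * beta * (A @ x - b)
    return x, lam

# Small illustrative problem data.
A = np.array([[1., 0., 1., 0., 1.],
              [0., 1., 0., 1., 0.],
              [1., 1., 0., 0., 1.]])
b = np.array([1., 2., 3.])
c = np.arange(5.0)
x, lam = relaxed_alm(A, b, c)
print(np.linalg.norm(A @ x - b))  # feasibility residual of the computed solution
```

For this strongly convex quadratic the dual iteration contracts, and a step size gamma close to 2 typically drives the feasibility residual down faster than the unit step gamma = 1, which is the acceleration effect the abstract refers to.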
