Abstract

This paper considers the regularization continuation method and the trust-region updating strategy for optimization problems with linear equality constraints. The proposed method exploits the linear conservation law of the regularization continuation method so that, unlike previous continuation methods and quasi-Newton updating formulas for linearly constrained optimization, it does not need to compute a correction step to preserve feasibility. Moreover, the new method uses a special limited-memory Broyden-Fletcher-Goldfarb-Shanno (L-BFGS) formula as a preconditioning technique to improve its computational efficiency in the well-posed phase, and it uses the inverse of the regularized two-sided projection of the Lagrangian Hessian as the preconditioner to improve its robustness. Numerical results show that the new method is more robust and faster than traditional optimization methods such as the alternating direction method of multipliers (ADMM), the sequential quadratic programming (SQP) method (the built-in subroutine fmincon.m of the MATLAB2020a environment), and the recent continuation method (Ptctr); the computational time of the new method is about 1/3 of that of SQP (fmincon.m). Finally, a global convergence analysis of the new method is given.
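The key property claimed above is that iterates remain feasible without a separate correction step. A minimal sketch of this idea (not the paper's algorithm, and with an illustrative toy problem chosen here): for min f(x) subject to Ax = b, parameterize the feasible set as x = x0 + Zy, where Ax0 = b and the columns of Z span the null space of A, so every iterate automatically satisfies the constraint.

```python
import numpy as np

def null_space_basis(A, tol=1e-12):
    """Orthonormal basis for null(A) via the SVD."""
    _, s, Vt = np.linalg.svd(A)
    rank = int(np.sum(s > tol * s.max()))
    return Vt[rank:].T

def reduced_gradient_descent(grad, A, b, x0, steps=200, lr=0.1):
    """Gradient descent in the reduced variable y with x = x0 + Z @ y.

    Because A @ Z = 0 and A @ x0 = b, every iterate x satisfies A @ x = b
    exactly; no feasibility-correction step is ever needed.
    """
    Z = null_space_basis(A)
    y = np.zeros(Z.shape[1])
    for _ in range(steps):
        x = x0 + Z @ y
        y -= lr * (Z.T @ grad(x))  # gradient projected onto null(A)
    return x0 + Z @ y

# Toy problem (hypothetical example): minimize 0.5*||x||^2 s.t. x1 + x2 = 1.
A = np.array([[1.0, 1.0]])
b = np.array([1.0])
x0 = np.array([1.0, 0.0])  # any feasible starting point (A @ x0 = b)
x = reduced_gradient_descent(lambda x: x, A, b, x0)
print(x)  # approaches [0.5, 0.5], the constrained minimizer
```

The paper's method additionally applies the regularization continuation framework and L-BFGS preconditioning on top of this feasibility-preserving structure; the sketch only illustrates why no correction step is required.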
