Abstract

In this paper, a class of nonlinear constrained optimization problems with both inequality and equality constraints is discussed. Based on a simple and effective penalty parameter and the idea of primal-dual interior point methods, a QP-free algorithm for solving the discussed problems is presented. At each iteration, the algorithm needs to solve only two or three reduced systems of linear equations with a common coefficient matrix, where a slightly new working set technique for identifying the active set is used to construct the coefficient matrix, and the positive definiteness restriction on the Lagrangian Hessian estimate is relaxed. Under reasonable conditions, the proposed algorithm is globally and superlinearly convergent. In the numerical experiments, by modifying the technique in Section 5 of (SIAM J. Optim. 14(1): 173-199, 2003), we introduce a slightly new way of computing the Lagrangian Hessian estimate based on second-order derivative information, which satisfies the associated assumptions. The proposed algorithm is then tested and compared on 59 typical test problems, and the results show that it is promising.
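The main computational saving described in the abstract is that the two or three linear systems solved per iteration share one coefficient matrix, so a single factorization can be reused for all right-hand sides. The sketch below illustrates only that reuse pattern with a generic nonsingular matrix and made-up right-hand sides; it is not the paper's working-set coefficient matrix.

```python
# Minimal sketch: reuse one factorization for the two or three systems
# solved per iteration. The matrix A and right-hand sides are placeholders,
# not the paper's working-set coefficient matrix.
import numpy as np
from scipy.linalg import lu_factor, lu_solve

def solve_shared(A, rhs_list):
    """Factor A once (LU) and solve A x = b for each b in rhs_list."""
    lu, piv = lu_factor(A)          # O(n^3) factorization, done once
    return [lu_solve((lu, piv), b)  # each subsequent solve is only O(n^2)
            for b in rhs_list]

# Hypothetical data: one coefficient matrix, three right-hand sides
rng = np.random.default_rng(0)
n = 5
A = rng.standard_normal((n, n)) + n * np.eye(n)   # well-conditioned test matrix
b1, b2, b3 = (rng.standard_normal(n) for _ in range(3))

d1, d2, d3 = solve_shared(A, [b1, b2, b3])
print(np.allclose(A @ d1, b1), np.allclose(A @ d2, b2), np.allclose(A @ d3, b3))
```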

Highlights

  • In this paper, we consider nonlinear constrained optimization problems with inequality and equality constraints (P) min f(x), s.t. gi(x) = 0, i ∈ I; gj(x) ≤ 0, j ∈ I₁, where I = {1, . . . , m}, I₁ = {m + 1, m + 2, . . . , m + m₁}, and the functions f and gj : Rⁿ → R (a toy instance of (P) is sketched after this list)

  • We briefly review the studies of primal-dual interior point (PDIP) quadratic program (QP)-free algorithms associated with our work

  • By means of problem (Pρ), we propose a PDIP-type algorithm for problem (P)
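As referenced in the first highlight, the following is a minimal, hypothetical instance of problem (P) coded in Python, with one equality constraint (index set I) and two inequality constraints (index set I₁). The specific functions and a simple feasibility check are illustrative only and do not come from the paper's test set.

```python
# A toy instance of (P): min f(x) s.t. g_j(x) = 0 for j in I, g_j(x) <= 0 for j in I1.
# The functions and index sets are illustrative, not from the paper's test problems.
import numpy as np

def f(x):                          # objective
    return x[0] ** 2 + x[1] ** 2

g = {
    1: lambda x: x[0] + x[1] - 1.0,   # equality constraint, j in I
    2: lambda x: -x[0],               # inequality constraints, j in I1
    3: lambda x: -x[1],
}
I  = [1]        # equality indices: {1, ..., m} with m = 1
I1 = [2, 3]     # inequality indices: {m+1, ..., m+m1} with m1 = 2

def is_feasible(x, tol=1e-8):
    eq_ok   = all(abs(g[j](x)) <= tol for j in I)
    ineq_ok = all(g[j](x) <= tol for j in I1)
    return eq_ok and ineq_ok

x = np.array([0.5, 0.5])
print(f(x), is_feasible(x))    # 0.5 True: x is feasible for this toy (P)
```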


Summary

Introduction

We consider nonlinear constrained optimization problems with inequality and equality constraints (P) min f(x), s.t. gi(x) = 0, i ∈ I; gj(x) ≤ 0, j ∈ I₁. Mayne and Polak [ ] proposed a simple scheme to convert (P) into a sequence of smooth inequality constrained optimization problems (Pρ) min fρ(x) := f(x) − ρ ∑_{j∈I} gj(x), s.t. gj(x) ≤ 0, j ∈ I ∪ I₁. With the help of the inequality constrained non-smooth optimization problem min f(x) + ∑_{j∈I} cj |gj(x)|, s.t. gj(x) ≤ 0, j ∈ I ∪ I₁, one can design an algorithm for solving the original problem (P), e.g., [ ], where cj > 0 is the penalty parameter that needs to be updated
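As a small numerical illustration of the two penalized formulations above, the sketch below evaluates, for a made-up instance of (P), the smooth Mayne-Polak objective fρ(x) = f(x) − ρ ∑_{j∈I} gj(x) of (Pρ) and the non-smooth objective f(x) + ∑_{j∈I} cj |gj(x)|; the functions, ρ, and cj are arbitrary choices for illustration, not the paper's data or update rules.

```python
# Sketch of the two penalized objectives from the introduction, on a made-up
# instance of (P); rho and c_j are arbitrary here, not the paper's update rule.
I  = [1]                 # equality constraint indices of (P)
I1 = [2]                 # inequality constraint indices of (P)

f = lambda x: (x[0] - 2.0) ** 2
g = {1: lambda x: x[0] - 1.0,     # g_1(x) = 0 in (P), relaxed to g_1(x) <= 0 in (P_rho)
     2: lambda x: -x[0]}          # g_2(x) <= 0 in both (P) and (P_rho)

def f_rho(x, rho):
    """Smooth Mayne-Polak objective of (P_rho): f(x) - rho * sum_{j in I} g_j(x)."""
    return f(x) - rho * sum(g[j](x) for j in I)

def f_l1(x, c):
    """Non-smooth penalized objective: f(x) + sum_{j in I} c_j * |g_j(x)|."""
    return f(x) + sum(c[j] * abs(g[j](x)) for j in I)

x = [0.5]                          # a point with g_1(x) = -0.5 <= 0
print(f_rho(x, rho=10.0))          # 2.25 + 10*0.5 = 7.25
print(f_l1(x, c={1: 10.0}))        # 2.25 + 10*0.5 = 7.25
```

At this particular point the two values coincide because gj(x) ≤ 0 makes |gj(x)| = −gj(x) and the illustrative cj equals ρ; in general the two formulations differ.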

