Abstract

Failure to satisfy Constraint Qualifications (CQs) leads to serious convergence difficulties for state-of-the-art Nonlinear Programming (NLP) solvers. Since this failure is often overlooked by practitioners, a strategy that enhances robustness on problems without CQs is vital. Inspired by penalty merit functions and barrier-like strategies, we propose and implement a combination of both in Ipopt. This strategy has the advantage of consistently satisfying the Linear Independence Constraint Qualification (LICQ) for an augmented problem, readily enabling regular step computations within the interior-point framework. Additionally, an update rule inspired by the work of Byrd et al. (2012) is implemented, which dynamically increases the penalty parameter as stationary points are approached. Extensive test results show favorable performance and increased robustness for our ℓ1-penalty strategies when compared to the regular version of Ipopt. Moreover, a dynamic optimization problem with nonsmooth dynamics, formulated as a Mathematical Program with Complementarity Constraints (MPCC), was solved in a single optimization stage without additional reformulation. Thus, this ℓ1 strategy has proved useful for a broad class of degenerate NLPs.
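For readers unfamiliar with the technique, a standard ℓ1-penalty (elastic) reformulation of this kind can be sketched as follows; this is the generic form found in the literature, not necessarily the exact augmented problem used in the paper. Given an NLP with equality constraints $c(x) = 0$, nonnegative elastic variables $p, n$ relax the constraints, and their ℓ1 norm is penalized with parameter $\rho > 0$:

```latex
\min_{x,\,p,\,n} \; f(x) + \rho \sum_{i} (p_i + n_i)
\quad \text{s.t.} \quad c(x) = p - n, \qquad p \ge 0, \; n \ge 0.
```

Because the elastic variables enter each constraint with identity blocks, the constraint Jacobian of the augmented problem has full row rank regardless of the rank of $\nabla c(x)$, so LICQ holds and the interior-point step computation remains well defined. The bounds $p, n \ge 0$ are then handled by the usual logarithmic barrier terms inside the interior-point framework.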
