Abstract

This paper presents a globally and superlinearly convergent algorithm for solving one-dimensional constrained minimization problems involving (not necessarily smooth) convex functions. The constraint is handled by what can be interpreted as a new type of penalty method. The algorithm does not require the objective function to be evaluated at infeasible points, and it does not use constraint function values at feasible points. The penalty parameter is automatically generated by the algorithm via linear approximation of the constraint function. As in the unconstrained case developed by Lemarechal and the author, the algorithm uses a step that is the shorter of a quadratic approximation step and a polyhedral approximation step. Here the latter is actually a "penalized" polyhedral step whose computation is well conditioned if the constraint satisfies a nondegeneracy assumption.
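To make the step-selection idea concrete, the following Python sketch applies it to a toy problem: at each iterate it builds a cutting-plane (polyhedral) model of the objective penalized by the linearized constraint, forms a quadratic-model step from a secant curvature estimate, and takes the shorter of the two trial steps. Everything here is an illustrative assumption rather than the paper's algorithm: the name `solve`, the grid search standing in for the exact polyhedral step, the doubling penalty update, and the pull-back to the linearized constraint boundary are all made up for this sketch.

```python
def solve(f, df, g, dg, x0, tol=1e-8, max_iter=100):
    """Toy 1-D solver sketch: minimize convex f(x) subject to g(x) <= 0."""
    x = x0
    cuts = []        # tangent data (x_i, f(x_i), f'(x_i)) gathered so far
    penalty = 1.0    # penalty weight, raised from constraint linearizations
    for _ in range(max_iter):
        fx, sx = f(x), df(x)
        gx, tx = g(x), dg(x)
        cuts.append((x, fx, sx))

        # Penalized polyhedral model: cutting-plane model of f plus a
        # penalty on the positive part of the linearized constraint.
        def model(y):
            poly = max(fi + si * (y - xi) for xi, fi, si in cuts)
            return poly + penalty * max(0.0, gx + tx * (y - x))

        # Crude stand-in for the exact polyhedral step: minimize the
        # model over a small grid around the current point.
        grid = [x - 0.5 + k / 100.0 for k in range(101)]
        y_poly = min(grid, key=model)

        # Quadratic-model step via a secant estimate of the curvature.
        if len(cuts) >= 2 and x != cuts[-2][0]:
            x_prev, _, s_prev = cuts[-2]
            curv = (sx - s_prev) / (x - x_prev)
            y_quad = x - sx / curv if curv > 0 else y_poly
        else:
            y_quad = y_poly

        # Take the shorter of the two trial steps.
        y = y_quad if abs(y_quad - x) <= abs(y_poly - x) else y_poly

        # If the trial point violates the constraint linearization, raise
        # the penalty and pull the point back to the linearized boundary.
        if tx != 0 and gx + tx * (y - x) > 0:
            penalty *= 2.0
            y = x - gx / tx

        if abs(y - x) < tol:
            return y
        x = y
    return x


# Example: minimize (x - 2)**2 subject to x - 1 <= 0 from x0 = 0;
# the sketch returns a point near the constrained minimizer x* = 1.
x_star = solve(lambda x: (x - 2) ** 2, lambda x: 2 * (x - 2),
               lambda x: x - 1, lambda x: 1.0, x0=0.0)
print(x_star)
```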
