Abstract

Many optimization problems are characterized by expensive objective and/or constraint function evaluations paired with a lack of derivative information. Direct search methods such as generating set search (GSS) are well understood and efficient for derivative-free optimization of unconstrained and linearly constrained problems. This paper presents a study of heuristic algorithms that address the more difficult problem of general nonlinear constraints where derivatives for objective or constraint functions are unavailable. We focus on penalty methods that use GSS to solve a sequence of linearly constrained problems, numerically comparing different penalty functions. A classical choice for penalizing constraint violations is ℓ₂², the squared ℓ₂ norm, which has advantages for derivative-based optimization methods. In our numerical tests, however, we show that exact penalty functions based on the ℓ₁, ℓ₂, and ℓ∞ norms converge to good approximate solutions more quickly and thus are attractive alternatives. Unfortunately, exact penalty functions are nondifferentiable and consequently degrade the final solution accuracy, so we also consider smoothed variants. Smoothed-exact penalty functions are attractive because they retain the differentiability of the original problem. Numerically, they are a compromise between exact and ℓ₂² penalties, i.e., they converge to a good solution somewhat quickly without sacrificing much solution accuracy. Moreover, the smoothing is parameterized and can potentially be adjusted to balance the two considerations. Since our
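The penalty functions compared in the abstract can be sketched as follows. This is a minimal illustration, not the paper's implementation: the constraint convention c_i(x) ≤ 0, the penalty parameter `rho`, and the log-sum-exp smoothing used for the smoothed variant are all assumptions made here for concreteness; the paper's own smoothing may differ.

```python
import numpy as np

def violations(c_vals):
    # Elementwise violation max(0, c_i(x)), assuming constraints of the
    # form c_i(x) <= 0 (a convention chosen here for illustration).
    return np.maximum(0.0, np.asarray(c_vals, dtype=float))

def penalty_l2_squared(c_vals, rho):
    # Classical differentiable penalty: rho * ||c(x)+||_2^2.
    v = violations(c_vals)
    return rho * np.dot(v, v)

def penalty_l1(c_vals, rho):
    # Exact but nondifferentiable penalty: rho * ||c(x)+||_1.
    return rho * np.sum(violations(c_vals))

def penalty_linf(c_vals, rho):
    # Exact but nondifferentiable penalty: rho * ||c(x)+||_inf.
    v = violations(c_vals)
    return rho * (v.max() if v.size else 0.0)

def penalty_l1_smoothed(c_vals, rho, alpha):
    # One common smoothing of the l1 penalty (an illustrative choice, not
    # necessarily the paper's): replace max(0, t) with the smooth
    # approximation alpha * log(1 + exp(t / alpha)), which tends to
    # max(0, t) as the smoothing parameter alpha -> 0.
    t = np.asarray(c_vals, dtype=float)
    return rho * np.sum(alpha * np.log1p(np.exp(t / alpha)))
```

For example, with violations `c = [0.5, -1.0]` (one constraint violated by 0.5, one satisfied) and `rho = 10`, the squared ℓ₂ penalty gives 2.5 while the ℓ₁ and ℓ∞ penalties give 5.0; shrinking `alpha` drives the smoothed ℓ₁ penalty toward the exact ℓ₁ value, which is how the smoothing parameter can trade differentiability against exactness.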
