Abstract

The goal of this paper is to investigate an approach for derivative-free optimization that has not received sufficient attention in the literature and yet is one of the simplest to implement and parallelize. In its simplest form, it consists of employing derivative-based methods for unconstrained or constrained optimization and replacing the gradient of the objective (and constraints) by finite-difference approximations. This approach is applicable to problems with or without noise in the functions. The differencing interval is determined by a bound on the second (or third) derivative and by the noise level, which is assumed to be known or accessible through difference tables or sampling. The use of finite-difference gradient approximations has been largely dismissed in the derivative-free optimization literature as too expensive in terms of function evaluations or as impractical in the presence of noise. However, the test results presented in this paper suggest that there is much to recommend it. The experiments compare NEWUOA, DFO-LS and COBYLA against finite-difference versions of L-BFGS, LMDER and KNITRO on three classes of problems: general unconstrained problems, nonlinear least squares problems, and nonlinear programs with inequality constraints.
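To make the approach concrete, the sketch below builds a forward-difference gradient whose differencing interval is chosen from an assumed noise level and a bound on the second derivative, and supplies it to a quasi-Newton solver. This is only an illustrative sketch: the helper name `fd_gradient`, the classical interval choice h = 2*sqrt(eps_f / L), and the use of SciPy's L-BFGS-B are assumptions for demonstration, not the paper's actual implementation or test setup.

```python
import numpy as np
from scipy.optimize import minimize

def fd_gradient(f, x, noise_level=1e-8, second_deriv_bound=1.0):
    """Forward-difference gradient approximation.

    The differencing interval balances the truncation error (governed by a
    bound L on the second derivative) against the noise eps_f in the function
    values; the classical choice h = 2*sqrt(eps_f / L) is used here purely
    as an illustration.
    """
    h = 2.0 * np.sqrt(noise_level / second_deriv_bound)
    fx = f(x)
    g = np.zeros_like(x)
    for i in range(len(x)):
        e = np.zeros_like(x)
        e[i] = h
        g[i] = (f(x + e) - fx) / h  # one extra function evaluation per coordinate
    return g

# Example: a noisy quadratic minimized by L-BFGS-B using the FD gradient.
rng = np.random.default_rng(0)
noise = 1e-8

def f(x):
    return 0.5 * np.dot(x, x) + noise * rng.standard_normal()

x0 = np.ones(5)
res = minimize(f, x0, method="L-BFGS-B",
               jac=lambda x: fd_gradient(f, x, noise_level=noise))
print(res.x)
```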
