Abstract

ASTRO-DF is a prominent trust-region method using adaptive sampling for stochastic derivative-free optimization of nonconvex problems. Its salient feature is an easy-to-understand-and-implement concept of maintaining “just enough” replications when evaluating points throughout the search to guarantee almost-sure convergence to a first-order critical point. To reduce the dependence of ASTRO-DF on the problem dimension and boost its finite-time performance, we present two key refinements, namely, (i) local models with diagonal Hessians constructed on interpolation points based on a coordinate basis and (ii) direct search using the interpolation points whenever possible. We demonstrate that the refinements in (i) and (ii) retain the convergence guarantees while matching existing results on iteration complexity. Uniquely, our iteration complexity results match the canonical rates without placing assumptions on the iterative models’ quality or their independence from function estimates. Numerical experimentation on a testbed of problems and comparison against existing popular algorithms reveal the computational advantage of ASTRO-DF due to the proposed refinements.
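To illustrate refinement (i), the following sketch builds a local quadratic model with a diagonal Hessian from 2n+1 coordinate-basis interpolation points (the center x and x ± δe_i for each coordinate direction). This is a minimal illustration of the general idea under assumed inputs (a deterministic objective f, center x, and sampling radius δ), not the paper's ASTRO-DF implementation, which evaluates points via adaptive Monte Carlo replications.

```python
import numpy as np

def diagonal_quadratic_model(f, x, delta):
    """Sketch: fit m(x + s) = f0 + g.s + 0.5 * s^T diag(h) s using
    2n+1 coordinate-basis interpolation points x and x +/- delta*e_i.
    f, x, delta are hypothetical inputs for illustration only."""
    n = len(x)
    f0 = f(x)
    g = np.zeros(n)   # model gradient
    h = np.zeros(n)   # diagonal of the model Hessian
    for i in range(n):
        e = np.zeros(n)
        e[i] = delta
        fp, fm = f(x + e), f(x - e)
        g[i] = (fp - fm) / (2.0 * delta)            # central difference
        h[i] = (fp - 2.0 * f0 + fm) / delta ** 2    # second difference
    return f0, g, h

# Example on a separable quadratic, where the fit is exact up to rounding:
f0, g, h = diagonal_quadratic_model(
    lambda x: float(np.sum(x ** 2)), np.array([1.0, 2.0]), 1e-3
)
```

Because the interpolation set is aligned with the coordinate axes, each Hessian diagonal entry is identified from only the two points along that axis, so the model-building cost grows linearly in the dimension rather than quadratically as with a full interpolation Hessian.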

