Abstract

ASTRO-DF is a prominent trust-region method using adaptive sampling for stochastic derivative-free optimization of nonconvex problems. Its salient feature is an easy-to-understand and easy-to-implement concept of maintaining “just enough” replications when evaluating points throughout the search to guarantee almost-sure convergence to a first-order critical point. To reduce the dependence of ASTRO-DF on the problem dimension and boost its performance in finite time, we present two key refinements, namely: (i) local models with diagonal Hessians constructed on interpolation points based on a coordinate basis; and (ii) direct search using the interpolation points whenever possible. We demonstrate that the refinements in (i) and (ii) retain the convergence guarantees while matching existing results on iteration complexity. Uniquely, our iteration complexity results match the canonical rates without placing assumptions on the iterative models’ quality or requiring their independence from the function estimates. Numerical experimentation on a testbed of problems and comparison against existing popular algorithms reveal the computational advantage of ASTRO-DF due to the proposed refinements.
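
To make refinement (i) concrete, the following minimal sketch (not the authors' implementation; the function and variable names are illustrative) shows how a quadratic model with a diagonal Hessian can be interpolated from 2d+1 points on a coordinate basis around a trust-region center. In ASTRO-DF, each evaluation would be an adaptively sampled Monte Carlo estimate rather than an exact function value.

```python
import numpy as np

def diagonal_quadratic_model(f, x0, h):
    """Interpolate a quadratic model with a diagonal Hessian from the
    2d+1 coordinate-basis points x0 and x0 +/- h*e_i (i = 1, ..., d).

    Returns (f0, g, H_diag) such that
        m(x0 + s) = f0 + g @ s + 0.5 * s @ (H_diag * s)
    matches f at all 2d+1 interpolation points.
    """
    d = x0.size
    f0 = f(x0)
    g = np.zeros(d)
    H_diag = np.zeros(d)
    for i in range(d):
        e = np.zeros(d)
        e[i] = h
        f_plus, f_minus = f(x0 + e), f(x0 - e)
        g[i] = (f_plus - f_minus) / (2.0 * h)              # central-difference gradient entry
        H_diag[i] = (f_plus - 2.0 * f0 + f_minus) / h**2   # diagonal curvature entry
    return f0, g, H_diag
```

Because only 2d+1 evaluations are needed (rather than the O(d^2) required for a full quadratic model), this construction keeps the per-iteration cost linear in the dimension; the same interpolation points can then be reused for the direct-search step in refinement (ii).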
