Abstract

We propose a trust region algorithm for solving nonconvex smooth optimization problems. For any $\overline{\epsilon} \in (0,\infty)$, the algorithm requires at most $\mathcal{O}(\epsilon^{-3/2})$ iterations, function evaluations, and derivative evaluations to drive the norm of the gradient of the objective function below any $\epsilon \in (0,\overline{\epsilon}]$. This improves upon the $\mathcal{O}(\epsilon^{-2})$ bound known to hold for some other trust region algorithms and matches the $\mathcal{O}(\epsilon^{-3/2})$ bound for the recently proposed Adaptive Regularisation framework using Cubics, also known as the ARC algorithm. Our algorithm, entitled TRACE, follows a trust region framework, but employs modified step acceptance criteria and a novel trust region update mechanism that allow the algorithm to achieve such a worst-case global complexity bound. Importantly, we prove that our algorithm also attains global and fast local convergence guarantees under assumptions similar to those made for other trust region algorithms. We also prove a worst-case upper bound on the number of iterations, function evaluations, and derivative evaluations that the algorithm requires to obtain an approximate second-order stationary point.
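To make the setting concrete, the following is a minimal sketch of a *generic* trust region loop of the kind the abstract refers to: at each iteration a quadratic model of the objective is approximately minimized within a ball of radius $\delta$, the step is accepted or rejected by a ratio test, and the radius is updated. This is only an illustration of the standard framework (here using a simple Cauchy-point subproblem solver); TRACE's specific contribution, per the abstract, lies in *modified* acceptance criteria and a *novel* radius update, which are not reproduced here. All function names and parameter values below are illustrative assumptions, not the paper's.

```python
import numpy as np

def trust_region_minimize(f, grad, hess, x0, delta0=1.0, eta=0.1,
                          tol=1e-6, max_iter=200):
    """Generic trust-region loop (illustration only: TRACE modifies the
    acceptance test and radius update to obtain the O(eps^{-3/2}) bound)."""
    x, delta = np.asarray(x0, dtype=float), delta0
    for _ in range(max_iter):
        g, H = grad(x), hess(x)
        gnorm = np.linalg.norm(g)
        if gnorm < tol:               # approximate first-order stationarity
            break
        # Cauchy-point step: minimize the quadratic model along -g,
        # truncated at the trust-region boundary of radius delta.
        gHg = g @ H @ g
        t = delta / gnorm if gHg <= 0 else min(gnorm**2 / gHg, delta / gnorm)
        s = -t * g
        pred = -(g @ s + 0.5 * s @ H @ s)   # model-predicted decrease
        ared = f(x) - f(x + s)              # actual decrease
        rho = ared / pred if pred > 0 else -np.inf
        if rho >= eta:                      # sufficient agreement: accept
            x = x + s
        if rho >= 0.75 and np.isclose(np.linalg.norm(s), delta):
            delta *= 2.0                    # model trusted: expand radius
        elif rho < 0.25:
            delta *= 0.5                    # model poor: contract radius
    return x
```

For instance, on a strongly convex quadratic the loop reduces to an exact-line-search gradient method and converges to the unique minimizer; the classical analysis of such loops yields the $\mathcal{O}(\epsilon^{-2})$ worst-case bound that TRACE improves upon.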
