Abstract

Here we adapt an extended version of the adaptive cubic regularization method with dynamic inexact Hessian information for nonconvex optimization from Bellavia et al. [Adaptive cubic regularization methods with dynamic inexact Hessian information and applications to finite-sum minimization. IMA Journal of Numerical Analysis. 2021;41(1):764–799] to the stochastic optimization setting. While function evaluations remain exact, this novel variant inherits the adaptive accuracy requirements for Hessian approximations introduced in the aforementioned paper and additionally employs inexact computations of the gradient. Without restrictions on the variance of the errors, we assume that these approximations are available within a sufficiently large, but fixed, probability, and we extend, in the spirit of Cartis and Scheinberg [Global convergence rate analysis of unconstrained optimization methods based on probabilistic models. Math Program Ser A. 2018;159(2):337–375], the deterministic analysis of the framework to its stochastic counterpart, showing that the expected number of iterations to reach a first-order stationary point matches the well-known worst-case optimal complexity. This is, in fact, still given by O(ϵ^(−3/2)) with respect to the first-order tolerance ϵ. Finally, numerical tests on nonconvex finite-sum minimization confirm that using inexact first- and second-order derivatives can be beneficial in terms of computational savings.
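As a rough illustration of the framework the abstract describes, the following is a minimal sketch of an adaptive cubic regularization loop for finite-sum minimization that uses subsampled (inexact) gradients and Hessians while keeping function evaluations exact. It is not the paper's algorithm: the subproblem is only solved approximately along the negative gradient (a generalized Cauchy step), the batch sizes are fixed rather than governed by the paper's adaptive accuracy conditions, and all names (`arc_inexact`, `cauchy_step`, the toy sigmoid loss, the constants) are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy nonconvex finite-sum problem: f(x) = (1/n) sum_i (sigmoid(a_i @ x) - b_i)^2.
# Everything below is a hedged sketch, not the algorithm from the paper.
def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

def f_val(x, A, b):
    # Exact function value (function evaluations stay exact in the framework).
    return np.mean((sigmoid(A @ x) - b) ** 2)

def grad_batch(x, A, b, idx):
    # Subsampled (inexact) gradient over the mini-batch idx.
    s = sigmoid(A[idx] @ x)
    r = 2.0 * (s - b[idx]) * s * (1.0 - s)
    return A[idx].T @ r / len(idx)

def hess_batch(x, A, b, idx, eps=1e-5):
    # Subsampled (inexact) Hessian, here crudely built by finite differences
    # of the batch gradient; symmetrized for safety.
    n = x.size
    g0 = grad_batch(x, A, b, idx)
    H = np.empty((n, n))
    for j in range(n):
        e = np.zeros(n); e[j] = eps
        H[:, j] = (grad_batch(x + e, A, b, idx) - g0) / eps
    return 0.5 * (H + H.T)

def cauchy_step(g, H, sigma):
    # Minimize the cubic model along -g:
    #   phi(a) = -a*||g||^2 + 0.5*a^2*(g'Hg) + (sigma/3)*a^3*||g||^3,
    # whose stationary condition is a quadratic in a with a positive root.
    gn = np.linalg.norm(g)
    qa, qb, qc = sigma * gn ** 3, g @ H @ g, gn ** 2
    alpha = (-qb + np.sqrt(qb ** 2 + 4.0 * qa * qc)) / (2.0 * qa)
    return -alpha * g

def arc_inexact(x, A, b, iters=30, batch=32, sigma=1.0, eta=0.1):
    n_samples = A.shape[0]
    for _ in range(iters):
        idx_g = rng.choice(n_samples, size=batch, replace=False)
        idx_h = rng.choice(n_samples, size=batch, replace=False)
        g = grad_batch(x, A, b, idx_g)   # inexact first-order information
        H = hess_batch(x, A, b, idx_h)   # inexact second-order information
        s = cauchy_step(g, H, sigma)
        model_dec = -(g @ s + 0.5 * s @ H @ s
                      + sigma / 3.0 * np.linalg.norm(s) ** 3)
        rho = (f_val(x, A, b) - f_val(x + s, A, b)) / model_dec
        if rho >= eta:                   # successful step: accept, relax sigma
            x = x + s
            sigma = max(0.5 * sigma, 1e-8)
        else:                            # unsuccessful: reject, inflate sigma
            sigma = 2.0 * sigma
    return x
```

Because a step is accepted only when the actual reduction is at least a fraction of the (positive) model reduction, the exact objective value is nonincreasing along the accepted iterates, even though gradients and Hessians are subsampled.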

