Abstract

An adaptive regularization algorithm using inexact function and derivative evaluations is proposed for the solution of composite nonsmooth nonconvex optimization problems. It is shown that this algorithm needs at most $O(|\log(\epsilon)|\,\epsilon^{-2})$ evaluations of the problem's functions and their derivatives to find an $\epsilon$-approximate first-order stationary point. This complexity bound therefore generalizes that provided by Bellavia et al. (Theoretical study of an adaptive cubic regularization method with dynamic inexact Hessian information, arXiv:1808.06239, 2018) for inexact methods applied to smooth nonconvex problems, and is within a factor $|\log(\epsilon)|$ of the optimal bound known for smooth and nonsmooth nonconvex minimization with exact evaluations. A practically more restrictive variant of the algorithm with worst-case complexity $O(|\log(\epsilon)| + \epsilon^{-2})$ is also presented.
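
The abstract does not spell out the algorithmic details, but the following minimal Python sketch illustrates the general shape of an adaptive regularization loop with inexact evaluations of the kind described above. The callables `f_approx` and `g_approx`, the simple regularized first-order model, and all parameter names are assumptions made for illustration; they are not the paper's actual composite construction or dynamic accuracy schedule.

```python
import numpy as np

def adaptive_regularization_inexact(x0, f_approx, g_approx, eps,
                                    sigma0=1.0, eta=0.1,
                                    gamma_inc=2.0, gamma_dec=0.5,
                                    max_iter=10_000):
    """Generic adaptive-regularization loop with inexact evaluations.

    f_approx(x, tol) and g_approx(x, tol) are assumed to return the
    objective value and gradient to within absolute accuracy `tol`.
    This is an illustrative sketch, not the paper's algorithm.
    """
    x, sigma = np.asarray(x0, dtype=float), sigma0
    for _ in range(max_iter):
        # Tie the requested evaluation accuracy to the target eps
        # (a crude proxy for a dynamic accuracy requirement).
        tol = 0.5 * eps
        g = g_approx(x, tol)
        if np.linalg.norm(g) <= eps:   # approximate first-order stationarity
            return x

        # Regularized model m(s) = f + g^T s + (sigma/2)*||s||^2,
        # minimized exactly by the scaled steepest-descent step below.
        s = -g / sigma
        decrease_pred = np.dot(g, g) / (2.0 * sigma)   # m(0) - m(s)

        f_x = f_approx(x, tol)
        f_trial = f_approx(x + s, tol)
        rho = (f_x - f_trial) / decrease_pred

        if rho >= eta:                  # sufficient (inexact) decrease: accept
            x = x + s
            sigma = max(gamma_dec * sigma, 1e-8)
        else:                           # reject and increase regularization
            sigma = gamma_inc * sigma
    return x
```

As a usage sketch, `f_approx` and `g_approx` could wrap a smooth test function and add noise bounded by the requested tolerance; the accept/reject test and the update of the regularization parameter are the standard ingredients whose iterations a worst-case analysis of the kind quoted above counts.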
