Abstract

This paper deals with regularized Newton methods, a flexible class of unconstrained optimization algorithms that is competitive with line-search and trust-region methods and potentially combines attractive elements of both. The particular focus is on combining regularization with limited memory quasi-Newton methods by exploiting the special structure of limited memory algorithms. Global convergence of the regularization methods is established under mild assumptions, and the details of regularized limited memory quasi-Newton updates, including their compact representations, are discussed. Numerical results using all large-scale test problems from the CUTEst collection indicate that our regularized version of L-BFGS is competitive with state-of-the-art line-search and trust-region L-BFGS algorithms as well as with previous attempts at combining L-BFGS with regularization, and can outperform some of them, especially when nonmonotone techniques are employed.
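
For readers unfamiliar with the approach, a regularized (quasi-)Newton step is commonly obtained by shifting the Hessian approximation. The following is a minimal sketch of one standard formulation, with assumed notation ($B_k$ for the limited memory Hessian approximation, $\mu_k > 0$ for the regularization parameter); it is not necessarily the exact update analyzed in the paper:

$$(B_k + \mu_k I)\, s_k = -\nabla f(x_k), \qquad x_{k+1} = x_k + s_k,$$

where $\mu_k$ is typically increased after an unsuccessful step and decreased after a successful one, playing a role analogous to the reciprocal of a trust-region radius: as $\mu_k \to 0$ the step approaches the pure quasi-Newton step, while large $\mu_k$ yields a short step close to the steepest-descent direction.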
