Abstract

In this paper, we study greedy variants of quasi-Newton methods. They are based on the updating formulas from a certain subclass of the Broyden family. In particular, this subclass includes the well-known DFP, BFGS, and SR1 updates. However, in contrast to the classical quasi-Newton methods, which use the difference of successive iterates for updating the Hessian approximations, our methods apply basis vectors, greedily selected so as to maximize a certain measure of progress. For greedy quasi-Newton methods, we establish an explicit nonasymptotic bound on their rate of local superlinear convergence, as applied to minimizing strongly convex and strongly self-concordant functions (and, in particular, to strongly convex functions with Lipschitz continuous Hessian). The established superlinear convergence rate contains a contraction factor, which depends on the square of the iteration counter. We also show that greedy quasi-Newton methods produce Hessian approximations whose deviation from the exact Hessians linearly converges to zero.
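For concreteness, the display below writes one such greedy update in formulas. The notation is ours, and the specific ratio used as the "measure of progress" is an illustrative assumption rather than a quotation from the paper: given a target Hessian A and a current approximation G, one greedily selects a coordinate vector e_i and updates G along it, e.g. in BFGS or SR1 fashion,

\[
u \;=\; \arg\max_{1 \le i \le n} \; \frac{e_i^\top G\, e_i}{e_i^\top A\, e_i},
\qquad
G_{+}^{\mathrm{BFGS}} \;=\; G - \frac{G u u^\top G}{u^\top G u} + \frac{A u u^\top A}{u^\top A u},
\qquad
G_{+}^{\mathrm{SR1}} \;=\; G - \frac{(G - A)\, u u^\top (G - A)}{u^\top (G - A)\, u}.
\]

Here the BFGS and SR1 formulas are the standard updates written with respect to the target matrix A along the direction u; the exact parametrization used in the paper's Broyden subclass may differ.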

Highlights

  • We propose new quasi-Newton methods, which are based on the updating formulas from a certain subclass of the Broyden family [3]

  • In contrast to the classical quasi-Newton methods, which use the difference of successive iterates for updating the Hessian approximations, our methods apply basis vectors, greedily selected to maximize a certain measure of progress

  • We present greedy quasi-Newton methods that are based on the updating formulas from the Broyden family and use greedily selected basis vectors for updating the Hessian approximations; a small runnable sketch of such an update follows this list
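As a small illustration of the bullet points above, the sketch below performs greedy BFGS-type updates of an approximation G toward a fixed target matrix A. It is a minimal toy example under our own assumptions (coordinate vectors as the candidate directions, the diagonal ratio as the greedy criterion), not the authors' reference implementation.

    # Minimal sketch of a greedy BFGS-type update of a Hessian approximation G
    # toward a fixed target matrix A. The greedy rule -- picking the coordinate
    # vector e_i maximizing (e_i' G e_i) / (e_i' A e_i) -- is our illustrative
    # choice of the "measure of progress" mentioned in the abstract.
    import numpy as np

    def greedy_bfgs_update(A: np.ndarray, G: np.ndarray) -> np.ndarray:
        """One BFGS update of G with respect to A along a greedily chosen basis vector."""
        # Greedily select the basis vector with the largest ratio of quadratic forms.
        i = int(np.argmax(np.diag(G) / np.diag(A)))
        u = np.zeros(A.shape[0])
        u[i] = 1.0
        Gu, Au = G @ u, A @ u
        # BFGS update with respect to A along u:
        #   G_+ = G - G u u' G / (u' G u) + A u u' A / (u' A u)
        return G - np.outer(Gu, Gu) / (u @ Gu) + np.outer(Au, Au) / (u @ Au)

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        Q = rng.standard_normal((5, 5))
        A = Q @ Q.T + np.eye(5)                # a positive definite "Hessian"
        G = np.linalg.norm(A, 2) * np.eye(5)   # initial approximation with G >= A
        for _ in range(30):
            G = greedy_bfgs_update(A, G)
        print(np.linalg.norm(G - A))           # deviation from A after the updates

On this toy quadratic example, the printed deviation from A should shrink as the number of updates grows, mirroring the claim that the produced Hessian approximations converge to the exact Hessian.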


Summary

Introduction

Quasi-Newton methods are widely regarded as some of the most efficient numerical schemes for smooth unconstrained optimization. The main idea of these algorithms is to approximate the standard Newton method by replacing the exact Hessian with an approximation, which is updated between iterations according to special formulas. Numerous variants of quasi-Newton algorithms exist, differing mainly in the rules for updating the Hessian approximations. The three most popular are the Davidon–Fletcher–Powell (DFP) method [1, 2], the Broyden–Fletcher–Goldfarb–Shanno (BFGS) method [6, 7, 8, 9, 10], and the Symmetric Rank 1 (SR1) method [1, 3]. For a general overview of the topic, see [14] and [25, Ch. 6]; see [28] for the application of quasi-Newton methods to non-smooth optimization.
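To contrast with the greedy variants studied here, recall the classical scheme in standard textbook notation (cf. the overviews cited above): the exact Hessian in Newton's step is replaced by an approximation B_k, which is then updated so as to satisfy the secant equation with the latest pair of iterates,

\[
x_{k+1} \;=\; x_k - B_k^{-1} \nabla f(x_k),
\qquad
B_{k+1} s_k \;=\; y_k,
\quad s_k = x_{k+1} - x_k, \;\; y_k = \nabla f(x_{k+1}) - \nabla f(x_k).
\]

The greedy methods described above replace the difference direction s_k by a greedily selected basis vector.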


