Abstract

We study the local convergence of classical quasi-Newton methods for nonlinear optimization. Although it was well established a long time ago that asymptotically these methods converge superlinearly, the corresponding rates of convergence still remain unknown. In this paper, we address this problem. We obtain the first explicit non-asymptotic rates of superlinear convergence for the standard quasi-Newton methods, which are based on the updating formulas from the convex Broyden class. In particular, for the well-known DFP and BFGS methods, we obtain rates of the form $\left(\frac{n L^2}{\mu^2 k}\right)^{k/2}$ and $\left(\frac{n L}{\mu k}\right)^{k/2}$, respectively, where $k$ is the iteration counter, $n$ is the dimension of the problem, $\mu$ is the strong convexity parameter, and $L$ is the Lipschitz constant of the gradient.
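To make the updating formulas concrete, here is a minimal NumPy sketch (our illustration, not code from the paper) of a convex Broyden class update of the Hessian approximation. The convention that $\tau = 1$ gives DFP and $\tau = 0$ gives BFGS, as well as the function name, are assumptions of this sketch:

```python
import numpy as np

def broyden_class_update(B, s, y, tau):
    """Convex Broyden class update of a Hessian approximation B.

    s = x_next - x and y = grad f(x_next) - grad f(x) are the standard
    quasi-Newton differences; tau in [0, 1] interpolates between
    BFGS (tau = 0) and DFP (tau = 1). Assumes s^T y > 0, which holds
    automatically for strongly convex f.
    """
    sy = s @ y
    Bs = B @ s
    # BFGS: remove the rank-one term along B s, add one along y.
    B_bfgs = B - np.outer(Bs, Bs) / (s @ Bs) + np.outer(y, y) / sy
    # DFP: project B with (I - y s^T / s^T y) and add the same y-term.
    P = np.eye(len(s)) - np.outer(y, s) / sy
    B_dfp = P @ B @ P.T + np.outer(y, y) / sy
    # A member of the convex Broyden class is a convex combination
    # of the two endpoint updates.
    return tau * B_dfp + (1.0 - tau) * B_bfgs
```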

Highlights

  • Motivation: In this work, we investigate the classical quasi-Newton algorithms for smooth unconstrained optimization, the main examples of which are the Davidon–Fletcher–Powell (DFP) method [1,2] and the Broyden–Fletcher–Goldfarb–Shanno (BFGS) method [3,4,5,6,7].

  • We study the local convergence of classical quasi-Newton methods for nonlinear optimization

  • We obtain the first explicit non-asymptotic rates of superlinear convergence for the standard quasi-Newton methods, which are based on the updating formulas from the convex Broyden class


Summary

Introduction

Motivation: In this work, we investigate the classical quasi-Newton algorithms for smooth unconstrained optimization, the main examples of which are the Davidon–Fletcher–Powell (DFP) and Broyden–Fletcher–Goldfarb–Shanno (BFGS) methods. Soon after the first results on superlinear convergence of quasi-Newton methods with exact line search, Broyden, Dennis and Moré [18] considered the quasi-Newton algorithms without line search and proved the local superlinear convergence of DFP, BFGS and several other methods; their analysis was based on the Frobenius-norm potential function. In Section 3, we analyze the standard quasi-Newton scheme, based on the updating rules from the convex Broyden class, as applied to minimizing a quadratic function. We show that this scheme has the same rate of linear convergence as the classical gradient method, together with a superlinear rate of convergence.
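The scheme analyzed there admits a short numerical illustration. The following sketch (our own, assuming unit step sizes and the initial approximation $B_0 = L I$; it reuses broyden_class_update from the sketch above) runs the iteration $x_{k+1} = x_k - B_k^{-1} \nabla f(x_k)$ with BFGS updates on a strongly convex quadratic:

```python
import numpy as np

# Strongly convex quadratic f(x) = 0.5 * x^T A x, so grad f(x) = A x.
rng = np.random.default_rng(0)
n = 20
Q = rng.standard_normal((n, n))
A = Q @ Q.T + n * np.eye(n)            # positive definite Hessian
grad = lambda x: A @ x

x = rng.standard_normal(n)
L = np.linalg.norm(A, 2)               # Lipschitz constant of the gradient
B = L * np.eye(n)                      # initial Hessian approximation B_0 = L*I

for k in range(15):
    g = grad(x)
    if np.linalg.norm(g) < 1e-12:      # stop once essentially converged
        break
    s = -np.linalg.solve(B, g)         # quasi-Newton step, no line search
    x = x + s
    y = grad(x) - g                    # gradient difference for the update
    B = broyden_class_update(B, s, y, tau=0.0)   # tau = 0: pure BFGS
    print(k, np.linalg.norm(g))        # gradient norms decay superlinearly
```

On a quadratic, $s^\top y = s^\top A s > 0$ for any nonzero step, so the update is always well defined without any line search.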

Convex Broyden class
Unconstrained quadratic minimization
Minimization of general functions
Discussion