Abstract
This paper studies the convergence properties of algorithms belonging to the class of self-scaling (SS) quasi-Newton methods for unconstrained optimization. This class depends on two parameters, say t_k and τ_k, for which the choice τ_k = 1 gives the Broyden family of unscaled methods, where t_k = 1 corresponds to the well-known DFP method. We propose simple conditions on these parameters that give rise to global convergence with inexact line searches for convex objective functions. Q-superlinear convergence is achieved if further restrictions on the scaling parameter are introduced. These convergence results extend the known results for the unscaled methods. Since the scaling parameter is heavily restricted, we consider a subclass of SS methods that satisfies the required conditions. Although convergence for the unscaled methods with t_k ≥ 1 is still an open question, we show that global and superlinear convergence of SS methods is possible and present, in particular, a new SS-DFP method.
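For context, a self-scaling quasi-Newton class of this kind is commonly written in the Oren–Luenberger/Broyden-family form for the Hessian approximation B_k as the update below. This is only a sketch of the standard formulation, assuming t_k plays the role of the Broyden parameter and τ_k the scaling parameter, with s_k = x_{k+1} − x_k and y_k = g_{k+1} − g_k (notation not defined in the abstract):

\[
B_{k+1} = \tau_k \left[\, B_k - \frac{B_k s_k s_k^{\mathsf T} B_k}{s_k^{\mathsf T} B_k s_k} + t_k\, w_k w_k^{\mathsf T} \right] + \frac{y_k y_k^{\mathsf T}}{s_k^{\mathsf T} y_k},
\qquad
w_k = \left( s_k^{\mathsf T} B_k s_k \right)^{1/2} \left( \frac{y_k}{s_k^{\mathsf T} y_k} - \frac{B_k s_k}{s_k^{\mathsf T} B_k s_k} \right).
\]

In this parametrization, setting τ_k = 1 recovers the unscaled Broyden family; within that family, t_k = 0 gives the BFGS update and t_k = 1 the DFP update.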