Abstract

A new class of algorithms for unconstrained optimization has recently been proposed by Davidon [Conic Approximations and Collinear Scalings for Optimizers, SIAM J. Numer. Anal., to appear]. This new method, called "optimization by collinear scaling," is derived here as a natural extension of existing quasi-Newton methods. The derivation is based upon constructing a collinear scaling of the variables so that a local quadratic model can interpolate both function and gradient values of the transformed objective function at the latest two iterates. Both the deviation of the function values from quadratic behavior and the gradient information influence the updating process. A particular member of this algorithm class is shown to have a Q-superlinear rate of convergence under standard assumptions on the objective function. The amount of computation required per update is essentially the same as for existing quasi-Newton methods.
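As a rough illustration of the construction described in the abstract (the notation below is illustrative and not necessarily that of the paper), a collinear scaling of the variables about the current iterate $x_{k+1}$ may be written as

$$ w(s) \;=\; \frac{s}{1 - h^{T} s}, \qquad s = x - x_{k+1}, $$

for some gauge vector $h$. Under this change of variables a conic model of the form

$$ c(s) \;=\; f_{k+1} \;+\; \frac{g_{k+1}^{T} s}{1 - h^{T} s} \;+\; \frac{1}{2}\,\frac{s^{T} A\, s}{\left(1 - h^{T} s\right)^{2}} $$

reduces to the ordinary quadratic $q(w) = f_{k+1} + g_{k+1}^{T} w + \tfrac{1}{2}\, w^{T} A w$ in the scaled variable. The free parameters $h$ and $A$ can then be chosen so that this quadratic interpolates the function and gradient values of the transformed objective at the two most recent iterates, which is the interpolation property the abstract refers to.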
