Abstract

Conjugate gradient methods are conjugate direction or gradient deflection methods that lie somewhere between the method of steepest descent and Newton's method. Their principal advantage is that they do not require the storage of any matrices, as Newton's method and quasi-Newton methods do, and they are designed to converge faster than the method of steepest descent. Unlike quasi-Newton or variable-metric methods, they are fixed-metric methods in which the search direction at each iteration is based on an approximation to the inverse Hessian obtained by updating a fixed, symmetric, positive definite matrix, typically the identity matrix. The resulting approximation is usually not symmetric, although some variants enforce symmetry and thereby yield memoryless quasi-Newton methods. In this paper, we present a scaled modified version of the conjugate gradient method suggested by Perry, which employs the quasi-Newton condition rather than conjugacy under inexact line searches in order to derive the search directions. The analysis is extended to the memoryless quasi-Newton modification of this method, as suggested by Shanno. Computational experience on standard test problems indicates that the proposed method, combined with Beale and Powell's restarts, improves upon existing conjugate gradient strategies.
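The paper's scaled Perry update and restart strategy are not reproduced here, but the following minimal sketch illustrates the memoryless quasi-Newton idea the abstract refers to: the search direction is obtained by applying one BFGS update of the identity matrix to the current gradient, so no n-by-n matrix is ever stored. The backtracking line search, the safeguard on the curvature term, and the test problem are simplifying assumptions for illustration only, not the inexact line search or the Beale-Powell restarts analyzed in the paper.

```python
import numpy as np

def memoryless_bfgs_direction(g_new, s, y):
    """Direction -H g_new, where H is one BFGS update of the identity
    (a memoryless quasi-Newton direction of the kind discussed above).

    g_new : gradient at the new iterate
    s     : previous step, x_{k+1} - x_k
    y     : gradient change, g_{k+1} - g_k
    """
    sy = s @ y
    if sy <= 1e-12:                      # safeguard: fall back to steepest descent
        return -g_new
    yg = y @ g_new
    sg = s @ g_new
    # -H g_new expanded so that only vector operations are needed
    return (-g_new
            + (yg / sy) * s
            + (sg / sy) * y
            - (1.0 + (y @ y) / sy) * (sg / sy) * s)

def minimize(f, grad, x0, max_iter=200, tol=1e-6):
    """Minimal sketch of the surrounding iteration; the Armijo backtracking
    search is a placeholder for an inexact (Wolfe-type) line search."""
    x = x0.copy()
    g = grad(x)
    d = -g
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        alpha = 1.0
        while f(x + alpha * d) > f(x) + 1e-4 * alpha * (g @ d):
            alpha *= 0.5
            if alpha < 1e-12:
                break
        x_new = x + alpha * d
        g_new = grad(x_new)
        d = memoryless_bfgs_direction(g_new, x_new - x, g_new - g)
        x, g = x_new, g_new
    return x

if __name__ == "__main__":
    # Hypothetical convex quadratic test problem: minimize 0.5 x'Ax - b'x
    A = np.array([[3.0, 1.0], [1.0, 2.0]])
    b = np.array([1.0, -1.0])
    f = lambda x: 0.5 * x @ A @ x - b @ x
    grad = lambda x: A @ x - b
    print(minimize(f, grad, np.zeros(2)))   # approaches A^{-1} b
```

Because the updated matrix is positive definite whenever the curvature condition s'y > 0 holds, the computed direction is a descent direction, which is what allows the matrix-free formulation above to retain the fast-convergence motivation stated in the abstract.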
