Abstract

The paper describes new preconditioned conjugate gradient algorithms for general nonlinear unconstrained optimization problems. To speed up convergence, the algorithms employ scaling matrices that transform the space of the original variables into a space in which the Hessian matrices of the functionals describing the problems have more clustered eigenvalues. This is done efficiently by applying BFGS or limited-memory BFGS updating matrices. Once the scaling matrix is computed, the next few iterations of the conjugate gradient algorithm are performed in the transformed space. A unique feature of these algorithms is the application of the reduced-Hessian approach to evaluate descent directions and the use of column scaling to improve conditioning. We believe that the proposed algorithms are competitive with limited memory quasi-Newton methods and with other preconditioned conjugate gradient algorithms.
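To make the idea concrete, the sketch below shows one common way a limited-memory BFGS approximation can act as a preconditioner inside a nonlinear conjugate gradient loop. It is not the paper's algorithm (in particular, it omits the reduced-Hessian machinery and column scaling); it is a minimal illustration under assumed choices: a Polak-Ribière update, a naive backtracking line search, and the standard L-BFGS two-loop recursion applied to the gradient. All function and variable names (`lbfgs_apply`, `preconditioned_cg`, `pairs`, etc.) are hypothetical.

```python
import numpy as np

def lbfgs_apply(g, pairs):
    """Apply an implicit L-BFGS inverse-Hessian approximation to g
    via the standard two-loop recursion (used here as a preconditioner)."""
    q = g.copy()
    alphas = []
    for s, y in reversed(pairs):          # most recent pair first
        rho = 1.0 / np.dot(y, s)
        a = rho * np.dot(s, q)
        alphas.append(a)
        q -= a * y
    if pairs:                              # initial scaling H0 = gamma * I
        s, y = pairs[-1]
        q *= np.dot(s, y) / np.dot(y, y)
    for (s, y), a in zip(pairs, reversed(alphas)):
        rho = 1.0 / np.dot(y, s)
        b = rho * np.dot(y, q)
        q += (a - b) * s
    return q

def preconditioned_cg(f, grad, x0, m=5, max_iter=200, tol=1e-6):
    """Nonlinear CG (Polak-Ribiere+) preconditioned by recent curvature pairs."""
    x = x0.copy()
    g = grad(x)
    pairs = []                             # recent (s, y) curvature pairs
    pg = lbfgs_apply(g, pairs)             # preconditioned gradient
    d = -pg
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        t, fx = 1.0, f(x)
        for _ in range(30):                # naive backtracking (placeholder for a Wolfe search)
            if f(x + t * d) <= fx + 1e-4 * t * np.dot(g, d):
                break
            t *= 0.5
        x_new = x + t * d
        g_new = grad(x_new)
        s, y = x_new - x, g_new - g
        if np.dot(s, y) > 1e-10:           # keep only pairs with positive curvature
            pairs.append((s, y))
            if len(pairs) > m:
                pairs.pop(0)
        pg_new = lbfgs_apply(g_new, pairs)
        beta = max(0.0, np.dot(g_new, pg_new - pg) / np.dot(g, pg))
        d = -pg_new + beta * d
        x, g, pg = x_new, g_new, pg_new
    return x

if __name__ == "__main__":
    # Small ill-conditioned quadratic test problem
    A = np.diag([1.0, 10.0, 100.0])
    f = lambda x: 0.5 * x @ A @ x
    grad = lambda x: A @ x
    print(preconditioned_cg(f, grad, np.array([1.0, 1.0, 1.0])))
```

In this sketch the curvature pairs play the role of the scaling matrix described in the abstract: applying them to the gradient is equivalent to running CG in a transformed space whose Hessian has more clustered eigenvalues, which is what accelerates convergence.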
