Abstract

Following the scaled conjugate gradient methods proposed by Andrei, we hybridize the memoryless BFGS preconditioned conjugate gradient method of Shanno with the spectral conjugate gradient method of Birgin and Martínez, based on a modified secant equation suggested by Yuan, and propose two modified scaled conjugate gradient methods. An interesting feature of our methods is that they use function values in addition to gradient values, and that the generated search directions satisfy the sufficient descent condition, which leads to global convergence for uniformly convex functions. Numerical comparisons between an implementation of one of our methods, which generates descent search directions for general functions, and an efficient scaled conjugate gradient method proposed by Andrei are made on a set of unconstrained optimization test problems from the CUTEr collection, using the performance profile introduced by Dolan and Moré.
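To make the ingredients concrete, the following is a minimal sketch of a spectral conjugate gradient iteration in which the secant product sᵀy is augmented with a function-value term, in the spirit of the modified secant equations cited above. This is an illustrative generic variant, not the paper's hybrid method: the spectral parameter θ, the Perry-type β, the Armijo line search, the restart safeguard, and the quadratic test function are all assumptions made for demonstration.

```python
def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def axpy(alpha, x, y):                      # returns y + alpha * x
    return [yi + alpha * xi for xi, yi in zip(x, y)]

def f(x):                                   # strictly convex quadratic test problem
    return (x[0] - 1.0) ** 2 + 10.0 * (x[1] - 2.0) ** 2

def grad(x):
    return [2.0 * (x[0] - 1.0), 20.0 * (x[1] - 2.0)]

def armijo(x, d, fx, gx, c=1e-4, shrink=0.5):
    """Backtracking line search enforcing the Armijo sufficient-decrease rule."""
    alpha, slope = 1.0, dot(gx, d)
    while f(axpy(alpha, d, x)) > fx + c * alpha * slope:
        alpha *= shrink
    return alpha

def spectral_cg(x, max_iter=300, tol=1e-8):
    g = grad(x)
    d = [-gi for gi in g]                   # steepest-descent start
    for _ in range(max_iter):
        if dot(g, g) ** 0.5 < tol:
            break
        fx = f(x)
        alpha = armijo(x, d, fx, g)
        x_new = axpy(alpha, d, x)
        g_new = grad(x_new)
        s = [alpha * di for di in d]        # s_k = x_{k+1} - x_k
        y = [gn - gi for gn, gi in zip(g_new, g)]
        # Modified secant term using function values (assumed Yuan-style form):
        # s^T y is augmented by 2(f_k - f_{k+1}) + (g_k + g_{k+1})^T s_k,
        # which vanishes on quadratics but corrects for higher-order curvature.
        vartheta = 2.0 * (fx - f(x_new)) + dot(axpy(1.0, g, g_new), s)
        sy = dot(s, y) + vartheta
        if abs(sy) > 1e-12:
            theta = dot(s, s) / sy          # spectral (Barzilai-Borwein-like) scale
            ty_s = [theta * yi - si for yi, si in zip(y, s)]
            beta = dot(ty_s, g_new) / sy    # Perry-type beta (assumed form)
            d = axpy(beta, d, [-theta * gn for gn in g_new])
        else:
            d = [-gn for gn in g_new]       # degenerate curvature: steepest descent
        if dot(g_new, d) >= 0.0:            # restart to enforce a descent direction
            d = [-gn for gn in g_new]
        x, g = x_new, g_new
    return x

x_star = spectral_cg([0.0, 0.0])
```

The restart safeguard mirrors the abstract's emphasis on sufficient descent: whenever the hybrid direction fails to be a descent direction, the iteration falls back to the negative gradient, which keeps the Armijo search well defined.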
