Abstract
Conjugate gradient methods are typically used to solve large-scale unconstrained optimization problems. Recently, Hager and Zhang (2006) proposed two conjugate gradient methods with guaranteed descent. In this paper, following Hager and Zhang (2006), we use the modified secant condition of Zhang et al. (1999) to present two new descent conjugate gradient methods. An interesting feature of the new methods is that they exploit both gradient and function-value information. Under suitable assumptions, global convergence of these methods is established. Numerical comparisons with the Hager-Zhang methods are reported.
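For orientation, a brief sketch of the modified secant condition of Zhang et al. (1999) that the abstract refers to; the notation below is ours and the particular choice of $u_k$ is conventional in that literature rather than fixed by this abstract. With $s_k = x_{k+1} - x_k$, $y_k = g_{k+1} - g_k$, the updated Hessian approximation $B_{k+1}$ is required to satisfy
\[
  B_{k+1} s_k = \tilde{y}_k, \qquad
  \tilde{y}_k = y_k + \frac{\theta_k}{s_k^{\top} u_k}\, u_k, \qquad
  \theta_k = 6\bigl(f_k - f_{k+1}\bigr) + 3\bigl(g_k + g_{k+1}\bigr)^{\top} s_k,
\]
where $u_k$ is any vector with $s_k^{\top} u_k \neq 0$ (a common choice is $u_k = s_k$). The scalar $\theta_k$ is what injects the function values $f_k$ and $f_{k+1}$, whereas the classical secant condition $B_{k+1} s_k = y_k$ uses gradient information only; this is the sense in which the new methods take both gradient and function-value information.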