Abstract

Conjugate gradient methods are widely used for large-scale unconstrained optimization problems. Most conjugate gradient methods do not always generate a descent search direction, so the descent condition is usually assumed in both the analysis and the implementation. Dai and Yuan (1999) proposed a conjugate gradient method that generates a descent search direction at every iteration and converges globally to the solution provided the line search satisfies the Wolfe conditions. In this paper, we give a new conjugate gradient method based on the study of Dai and Yuan, and show that our method always produces a descent search direction and converges globally if the Wolfe conditions are satisfied. Moreover, our method incorporates second-order curvature information with higher accuracy by using the modified secant condition proposed by Zhang, Deng and Chen (1999) and Zhang and Xu (2001). Numerical results show that our method is very efficient on standard test problems, provided the parameter included in the method is chosen well.
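To fix notation, the following is a minimal sketch of the standard quantities the abstract refers to (the conjugate gradient iteration, the Dai–Yuan choice of the parameter beta, the Wolfe conditions, and the Zhang–Deng–Chen / Zhang–Xu modified secant condition in its commonly stated form); the paper's own parameterized search direction is not reproduced here, and the exact form of the modified secant term should be checked against the cited papers.

% Conjugate gradient iteration for min f(x), with g_k = \nabla f(x_k):
\[
  x_{k+1} = x_k + \alpha_k d_k, \qquad
  d_{k+1} = -g_{k+1} + \beta_k d_k, \qquad d_0 = -g_0 .
\]
% Dai--Yuan (1999) choice of \beta_k:
\[
  \beta_k^{\mathrm{DY}} = \frac{\|g_{k+1}\|^2}{d_k^{\top}\,(g_{k+1} - g_k)} .
\]
% Wolfe conditions on the step length \alpha_k, with constants 0 < \delta < \sigma < 1:
\[
  f(x_k + \alpha_k d_k) \le f(x_k) + \delta \alpha_k g_k^{\top} d_k, \qquad
  g(x_k + \alpha_k d_k)^{\top} d_k \ge \sigma g_k^{\top} d_k .
\]
% Modified secant condition (standard Zhang--Deng--Chen / Zhang--Xu form), with
% s_k = x_{k+1} - x_k, y_k = g_{k+1} - g_k, and u_k any vector with s_k^{\top} u_k \ne 0:
\[
  B_{k+1} s_k = \tilde{y}_k, \qquad
  \tilde{y}_k = y_k + \frac{\theta_k}{s_k^{\top} u_k}\, u_k, \qquad
  \theta_k = 6\bigl(f(x_k) - f(x_{k+1})\bigr) + 3\,(g_k + g_{k+1})^{\top} s_k .
\]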
