Abstract

In this paper, we propose a new conjugate gradient algorithm for which both the descent and the conjugacy conditions are guaranteed for all $k \ge 0$. The search direction is selected as a linear combination of $-g_{k+1}$ and $s_k$, where $g_{k+1} = \nabla f(x_{k+1})$ and $s_k = x_{k+1} - x_k$, and the coefficients of this linear combination are chosen so that both the descent and the conjugacy conditions are satisfied at every iteration. To define the algorithm and to prove its convergence, a modified Wolfe line search is introduced, in which the parameter of the second (curvature) Wolfe condition is adjusted at every iteration. It is shown that, for general nonlinear functions, the algorithm with the modified Wolfe line search generates search directions whose norms remain bounded. The algorithm uses an acceleration scheme that modifies the step length $\alpha_k$ so as to improve the reduction of the function values along the iterations. Numerical comparisons with some conjugate gradient algorithms on a set of 750 unconstrained optimization problems, some of them from the CUTE library, show that this computational scheme outperforms known conjugate gradient algorithms such as those of Hestenes and Stiefel; Polak, Ribière, and Polyak; Dai and Yuan and the hybrid Dai–Yuan method; as well as the CG_DESCENT method of Hager and Zhang with Wolfe line search conditions.
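As a sketch of how the two conditions can pin down the coefficients, write the direction as $d_{k+1} = -\theta_{k+1} g_{k+1} + \beta_k s_k$ and, for illustration, take the descent condition in the sufficient-descent form $g_{k+1}^\top d_{k+1} = -\|g_{k+1}\|^2$ and the conjugacy condition in the Dai–Liao form $y_k^\top d_{k+1} = -t\, s_k^\top g_{k+1}$, with $y_k = g_{k+1} - g_k$ and $t \ge 0$ (these particular right-hand sides are an assumption for exposition; the paper fixes its own). Substituting $d_{k+1}$ into the two conditions yields the $2 \times 2$ linear system

$$
\begin{aligned}
\|g_{k+1}\|^2\,\theta_{k+1} - (s_k^\top g_{k+1})\,\beta_k &= \|g_{k+1}\|^2,\\
(y_k^\top g_{k+1})\,\theta_{k+1} - (y_k^\top s_k)\,\beta_k &= t\, s_k^\top g_{k+1},
\end{aligned}
$$

whose solution determines $\theta_{k+1}$ and $\beta_k$ at every iteration, so both conditions hold by construction.

The acceleration idea can likewise be illustrated with a quadratic model of $f$ along the search direction. The sketch below is a minimal Python rendering of that idea, not the paper's exact scheme: `grad`, `accelerated_step`, and the rescaling rule are assumptions for illustration, and `alpha` is taken to already satisfy the (modified) Wolfe conditions.

```python
import numpy as np

def accelerated_step(grad, x, d, alpha):
    """Rescale a Wolfe step length by the minimizer of a quadratic model
    of f along d (an illustrative acceleration, not the paper's rule)."""
    g = grad(x)                      # gradient at the current iterate
    z = x + alpha * d                # trial point produced by the line search
    gz = grad(z)                     # gradient at the trial point
    a = alpha * np.dot(g, d)         # phi'(0); negative for a descent direction
    b = alpha * np.dot(gz - g, d)    # secant estimate of curvature along d
    if b > 0:                        # convex model: move to its minimizer
        eta = -a / b                 # eta > 0 since a < 0 and b > 0
        return x + eta * alpha * d
    return z                         # otherwise keep the plain Wolfe step
```

When the one-dimensional model is accurate, `eta` differs from 1 exactly where the plain Wolfe step over- or under-shoots the minimizer along `d`, which is what improving the reduction of the function values along the iterations amounts to.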
