Abstract

Two approaches based on the eigenvalues and the singular values of the matrix representing the search direction in conjugate gradient algorithms are considered. Using a special approximation of the inverse Hessian of the objective function, which depends on a positive parameter, we obtain a search direction that satisfies both the sufficient descent condition and the Dai–Liao conjugacy condition. In the first approach the parameter in the search direction is determined by clustering the eigenvalues of the matrix defining it. The second approach determines the parameter by minimizing the condition number of the matrix representing the search direction; in this case the resulting conjugate gradient algorithm is exactly the three-term conjugate gradient algorithm proposed by Zhang, Zhou and Li. The global convergence of both algorithms is proved for uniformly convex functions. Intensive numerical experiments on 800 unconstrained optimization test problems show that the two approaches have similar numerical performance, and that both algorithms are significantly more efficient and more robust than the CG-DESCENT algorithm of Hager and Zhang. By solving five applications from the MINPACK-2 test problem collection, with variables, we show that the suggested conjugate gradient algorithms are top performers versus CG-DESCENT.
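For orientation, the following is a minimal Python sketch of a three-term conjugate gradient direction of the Zhang–Zhou–Li family referenced in the abstract, d_{k+1} = -g_{k+1} + beta_k d_k - theta_k y_k with y_k = g_{k+1} - g_k. It is an illustrative sketch only: the quadratic test function, the Armijo line-search parameters, and all helper names are assumptions for demonstration, not details taken from the paper.

    import numpy as np

    def three_term_cg_direction(g_new, g_old, d_old):
        """Three-term CG direction d = -g_new + beta*d_old - theta*y,
        with y = g_new - g_old (a common Zhang-Zhou-Li formulation;
        illustrative, not the paper's exact parameterization)."""
        y = g_new - g_old
        denom = np.dot(g_old, g_old)          # ||g_old||^2
        beta = np.dot(g_new, y) / denom       # PRP-type coefficient
        theta = np.dot(g_new, d_old) / denom  # third-term coefficient
        return -g_new + beta * d_old - theta * y

    # Usage sketch: minimize a strictly convex quadratic
    # f(x) = 0.5 x^T A x - b^T x with a simple Armijo backtracking
    # line search (all parameters below are illustrative assumptions).
    A = np.diag([1.0, 10.0, 100.0])
    b = np.ones(3)
    f = lambda x: 0.5 * x @ A @ x - b @ x
    grad = lambda x: A @ x - b

    x = np.zeros(3)
    g = grad(x)
    d = -g                                    # first step: steepest descent
    for k in range(200):
        if np.linalg.norm(g) < 1e-8:
            break
        t = 1.0
        while f(x + t * d) > f(x) + 1e-4 * t * (g @ d):
            t *= 0.5                          # Armijo backtracking
        x_new = x + t * d
        g_new = grad(x_new)
        d = three_term_cg_direction(g_new, g, d)
        x, g = x_new, g_new
    print(x)                                  # approaches A^{-1} b = [1, 0.1, 0.01]

A short algebraic check shows why directions of this form are attractive: g_new^T d = -||g_new||^2 + beta (g_new^T d_old) - theta (g_new^T y) = -||g_new||^2, so the sufficient descent condition mentioned in the abstract holds by construction, independently of the line search.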
