Abstract

The conjugate gradient method is a useful method for solving large-scale unconstrained optimisation problems, with applications in several fields such as engineering, medical science, image restoration, neural networks, and many others. Its main benefit is that, unlike Newton's method and its quasi-Newton approximations, it does not require the second derivative or an approximation of it. Moreover, the algorithm of the conjugate gradient method is simple and easy to implement. This study proposes a new modified conjugate gradient method that contains four terms and builds on popular two- and three-term conjugate gradient methods. The new algorithm satisfies the descent condition and possesses the convergence property. In the numerical results section, we compare the new algorithm with well-known methods such as CG-Descent. The numerical results show that the new algorithm is more efficient than other popular CG methods, such as CG-Descent 6.8, in terms of the number of function evaluations, number of gradient evaluations, number of iterations, and CPU time.
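
For readers new to the setting, the sketch below shows a generic two-term nonlinear CG loop in Python. It is a minimal illustration only: the Hestenes-Stiefel coefficient, the Armijo backtracking line search, and the restart safeguards are assumptions chosen to keep the sketch self-contained, not the four-term method or the line search proposed in this paper.

```python
import numpy as np

def armijo_backtracking(f, g, x, d, alpha=1.0, rho=0.5, c=1e-4):
    # Plain Armijo backtracking, used only to keep the sketch
    # self-contained; the paper relies on Wolfe-Powell-type line searches.
    fx, slope = f(x), g(x) @ d
    while f(x + alpha * d) > fx + c * alpha * slope:
        alpha *= rho
    return alpha

def nonlinear_cg(f, g, x0, tol=1e-6, max_iter=5000):
    """Generic two-term nonlinear CG: x_{k+1} = x_k + alpha_k * d_k with
    d_k = -g_k + beta_k * d_{k-1}; beta_k below is Hestenes-Stiefel."""
    x = np.asarray(x0, dtype=float)
    gk = g(x)
    d = -gk                                   # first direction: steepest descent
    for _ in range(max_iter):
        if np.linalg.norm(gk) < tol:
            break
        if gk @ d >= 0:                       # safeguard: restart when d_k
            d = -gk                           # is not a descent direction
        alpha = armijo_backtracking(f, g, x, d)
        x_new = x + alpha * d
        g_new = g(x_new)
        y = g_new - gk                        # y_{k-1} = g_k - g_{k-1}
        denom = d @ y
        beta = (g_new @ y) / denom if abs(denom) > 1e-12 else 0.0
        d = -g_new + beta * d                 # beta = 0 restarts the method
        x, gk = x_new, g_new
    return x

# Example: minimise the Rosenbrock function, whose minimiser is (1, 1).
f = lambda x: (1 - x[0])**2 + 100 * (x[1] - x[0]**2)**2
g = lambda x: np.array([-2 * (1 - x[0]) - 400 * x[0] * (x[1] - x[0]**2),
                        200 * (x[1] - x[0]**2)])
print(nonlinear_cg(f, g, [-1.2, 1.0]))
```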

Highlights

  • To solve large-scale unconstrained optimisation problems, we prefer the conjugate gradient (CG) method since it is efficient and robust and does not use the second derivative

  • A comparison with other popular and strong CG coefficients is carried out. The comparison includes the CG-Descent 6.8 and DL+ CG formulas and is based on CPU time, the number of iterations, the number of function evaluations, and the number of gradient evaluations

  • We use the approximate weak Wolfe–Powell (WWP) line search as mentioned by the authors (a minimal checker for the WWP conditions is sketched after this list). The results of the FTCGHS and DL+ CG methods are obtained by running the modified CG-Descent code; the code can be obtained from Hager's web page: https://people.clas.ufl.edu/hager/software/
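
For reference, the weak Wolfe–Powell conditions named in the last highlight combine a sufficient-decrease test with a curvature test; the minimal Python checker below illustrates them. The parameter defaults (delta = 1e-4, sigma = 0.9) are common choices assumed here, and the approximate variant implemented in CG-Descent further replaces the decrease test with a derivative-based approximation.

```python
import numpy as np

def satisfies_weak_wolfe(f, g, x, d, alpha, delta=1e-4, sigma=0.9):
    """Check the weak Wolfe-Powell (WWP) conditions for a step size alpha:
      decrease:  f(x + alpha*d) <= f(x) + delta * alpha * g(x)^T d
      curvature: g(x + alpha*d)^T d >= sigma * g(x)^T d
    with 0 < delta < sigma < 1 (the defaults are common assumed values)."""
    slope = np.dot(g(x), d)
    x_new = x + alpha * d
    decrease = f(x_new) <= f(x) + delta * alpha * slope
    curvature = np.dot(g(x_new), d) >= sigma * slope
    return decrease and curvature

# Example on f(x) = x^2 from x = 1 along the descent direction d = -1:
f = lambda x: x * x
g = lambda x: 2 * x
print(satisfies_weak_wolfe(f, g, 1.0, -1.0, 0.5))   # True: both tests hold
```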

Summary

Introduction

To solve large-scale unconstrained optimisation problems, we prefer the conjugate gradient (CG) method since it is efficient and robust and does not use the second derivative. Powell [9] gave an example showing that the PRP method can fail to converge even when the exact line search is employed. By using equations (2) and (12), Dai and Liao [16] proposed the following CG formula:

$$\beta_k^{DL} = \frac{g_k^T y_{k-1}}{d_{k-1}^T y_{k-1}} - t\,\frac{g_k^T s_{k-1}}{d_{k-1}^T y_{k-1}}, \qquad t > 0.$$

Based on equation (13), Babaie-Kafaki and Ghanbari [21] proposed a CG method with an adaptive Dai–Liao parameter of the form

$$t_k = \max\!\left(\varsigma,\; 1 - \frac{s_{k-1}^T y_{k-1}}{\|y_{k-1}\|^2}\right).$$

To avoid using the condition $g_k^T d_{k-1} / (d_{k-1}^T g_{k-1}) > \eta$ with $\eta \in (0, 1)$, Liu et al. [24] constructed a three-term CG method with a search direction of the form

$$d_k = -g_k + \left(\beta_k^{LS} - \cdots\right) d_{k-1}.$$

Based on the SWP line search, Yao et al. [25] selected $t_k$ so that the resulting direction satisfies the descent condition. The CG method can be applied in several fields, including neural networks, image restoration, medical science, machine learning, and finance and economics; the reader can refer to [26-36] for more about the CG method and its applications.
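
To make the Dai–Liao update above concrete, here is a minimal Python sketch of the resulting search direction. The default parameter t = 0.1, the variable names, the restart safeguard, and the example values are illustrative assumptions, not part of the original formula.

```python
import numpy as np

def dai_liao_direction(g_new, g_old, d_old, x_new, x_old, t=0.1):
    """Dai-Liao direction d_k = -g_k + beta_k^{DL} * d_{k-1}, with
    beta_k^{DL} = (g_k^T y_{k-1} - t * g_k^T s_{k-1}) / (d_{k-1}^T y_{k-1})
    and t > 0 the Dai-Liao parameter (the default 0.1 is an assumption)."""
    y = g_new - g_old                  # y_{k-1} = g_k - g_{k-1}
    s = x_new - x_old                  # s_{k-1} = x_k - x_{k-1}
    denom = d_old @ y
    if abs(denom) < 1e-12:             # safeguard (assumption): restart with
        return -g_new                  # the steepest-descent direction
    beta = (g_new @ y - t * (g_new @ s)) / denom
    return -g_new + beta * d_old

# Tiny usage example with made-up iterates (illustrative values only):
g_old, g_new = np.array([1.0, -2.0]), np.array([0.5, -1.0])
x_old, x_new = np.array([0.0, 0.0]), np.array([0.1, 0.2])
print(dai_liao_direction(g_new, g_old, -g_old, x_new, x_old))
```

A direction computed this way can be checked against the descent condition simply by verifying $g_k^T d_k < 0$, the property the paper establishes for its four-term direction.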

The New Search Direction
Numerical Results and Discussion
Conclusions