Abstract

The conjugate gradient (CG) method is a useful tool for locating the optimum point of unconstrained optimization problems because it requires neither second derivatives nor approximations of them. Moreover, the conjugate gradient method can be applied in many fields such as machine learning, deep learning, neural networks, and many others. This paper constructs a four-term conjugate gradient method that satisfies the descent and convergence properties and is used to obtain a stationary point. The new modification is built on the Liu and Storey conjugate gradient method together with two-term and three-term conjugate gradient methods. To analyze efficiency and robustness, we used more than 150 optimization functions from the CUTEst library with different dimensions and shapes. The numerical results show that the new modification outperforms recent conjugate gradient methods such as CG-Descent, Dai and Liao, and others in terms of the number of function evaluations, number of gradient evaluations, number of iterations, and CPU time.
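
For readers unfamiliar with the framework, the sketch below shows a standard two-term nonlinear conjugate gradient iteration using the Liu and Storey conjugacy parameter. It only illustrates the general class of methods the paper builds on, not the proposed four-term method; the backtracking line search and the quadratic test function are assumptions made for the example (in practice a strong Wolfe line search is typically assumed).

```python
import numpy as np

def cg_liu_storey(f, grad, x0, tol=1e-6, max_iter=1000):
    """Generic two-term nonlinear CG with the Liu-Storey beta.

    Illustrative sketch only: the paper's four-term method adds
    further terms to the search direction, and a (strong) Wolfe
    line search usually replaces the simple backtracking used here.
    """
    x = np.asarray(x0, dtype=float)
    g = grad(x)
    d = -g                                  # first direction: steepest descent
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        if g.dot(d) >= 0:                   # safeguard: restart if not a descent direction
            d = -g
        # simple backtracking (Armijo) line search
        alpha, fx = 1.0, f(x)
        while f(x + alpha * d) > fx + 1e-4 * alpha * g.dot(d):
            alpha *= 0.5
        x_new = x + alpha * d
        g_new = grad(x_new)
        # Liu-Storey parameter: beta = g_{k+1}^T (g_{k+1} - g_k) / (-d_k^T g_k)
        beta = g_new.dot(g_new - g) / (-d.dot(g))
        d = -g_new + beta * d               # two-term search direction
        x, g = x_new, g_new
    return x

# usage on a simple quadratic (assumed test problem, not from the paper)
f = lambda x: 0.5 * x.dot(x)
grad = lambda x: x
print(cg_liu_storey(f, grad, np.array([3.0, -2.0])))
```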
