Abstract

A new modified three-term conjugate gradient (CG) method is proposed for solving large-scale optimization problems. The idea is related to the well-known Polak-Ribière-Polyak (PRP) formula. Although the numerator of the PRP formula plays a vital role in its strong numerical results and keeps the method free of the jamming issue, the PRP method is not globally convergent. The idea behind the new three-term CG method is therefore to keep the PRP numerator and combine it with the denominator of any well-performing CG formula. The new modification satisfies the sufficient descent condition independent of any line search. The novelty is that, under the Wolfe-Powell line search, the new modification possesses global convergence properties for both convex and nonconvex functions. Numerical computations with the Wolfe-Powell line search on standard optimization test functions show the efficiency and robustness of the new modification.

Highlights

  • The conjugate gradient method is an efficient and organized tool for solving large-scale nonlinear optimization problems, owing to its simplicity and low memory requirements

  • In this part we compare the numerical performance of the proposed three-term BZAU (Bakhtawar, Zabidin, Ahmad and Ummu) method with that of the recently developed TMPRP1 method

  • Dantzig (1914–2005) said that the final test of a theory is its capacity to solve the problems which originated it. This is one of the main reasons we select large-scale unconstrained optimization problems to test the theoretical progress in numerical form through mathematical programming [38]


Introduction

The conjugate gradient method is an efficient and organized tool for solving large-scale nonlinear optimization problems, owing to its simplicity and low memory requirements. The papers by Beale [15], McGuire and Wolfe [16], Nazareth [17], Deng and Li [18], Dai and Yuan [19], Zhang et al [20, 21], Cheng [22], Zhang et al [23], Al-Bayati and Sharif [24], Narushima et al [25], Andrei [26,27,28], Sugiki et al [29], Al-Baali et al [30], Babaie-Kafaki and Ghanbari [31], and Sun and Liu [32] presented different types of three-term conjugate gradient methods, reported their numerical performance and efficiency, and proved their global convergence properties. The modified PRP (MPRP) search direction is generated by d0 = −g0 and dk = −gk + βkMPRP dk−1 if k ≥ 1, where βkMPRP = gkT(gk − gk−1)/(μ|gkTdk−1| + ‖gk−1‖2) or βkMPRP+ = max {gkT(gk − gk−1)/(μ|gkTdk−1| + ‖gk−1‖2), 0}. This method has the attractive property of satisfying the sufficient descent condition gkTdk = −‖gk‖2 independent of any line search, and it attains global convergence if the standard Wolfe line search is used.
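A minimal numerical sketch of this direction update may help make the recursion concrete. The sketch below applies the βkMPRP+ formula on a convex quadratic with an exact line search; the test matrix A, vector b, the parameter μ = 0.1, and the helper names grad and mprp_beta are illustrative assumptions, not taken from the paper.

```python
import numpy as np

# Illustrative convex quadratic test problem: f(x) = 0.5 x^T A x - b^T x
A = np.diag([1.0, 10.0, 100.0])
b = np.array([1.0, 1.0, 1.0])

def grad(x):
    # Gradient of the quadratic: g(x) = A x - b
    return A @ x - b

def mprp_beta(g, g_prev, d_prev, mu=0.1):
    # beta_k^{MPRP+} = max{ g_k^T (g_k - g_{k-1}) / (mu |g_k^T d_{k-1}| + ||g_{k-1}||^2), 0 }
    num = g @ (g - g_prev)
    den = mu * abs(g @ d_prev) + g_prev @ g_prev
    return max(num / den, 0.0)

x = np.zeros(3)
g = grad(x)
d = -g                                   # d_0 = -g_0
for _ in range(200):
    if np.linalg.norm(g) <= 1e-10:       # stop when the gradient vanishes
        break
    alpha = -(g @ d) / (d @ A @ d)       # exact line search step for a quadratic
    x = x + alpha * d
    g_prev, g = g, grad(x)
    d = -g + mprp_beta(g, g_prev, d) * d # d_k = -g_k + beta_k^{MPRP+} d_{k-1}

x_star = np.linalg.solve(A, b)           # exact minimizer of the quadratic
```

In practice the exact step used here would be replaced by a Wolfe (or Wolfe-Powell) line search, since α cannot be computed in closed form for general nonlinear objectives.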

Motivation and Formula
Global Convergence of Modified Three Term
Numerical Results
A Quadratic Function
Conclusion