Abstract

The conjugate gradient (CG) method is applied in many fields, such as neural networks, image restoration, machine learning, and deep learning. The Polak–Ribière–Polyak (PRP) and Hestenes–Stiefel (HS) conjugate gradient methods are considered among the most efficient methods for solving nonlinear optimization problems. However, neither method satisfies the descent property or the global convergence property for general nonlinear functions. In this paper, we present two new modifications of the PRP method with restart conditions. The proposed conjugate gradient methods satisfy the global convergence property and the descent property for general nonlinear functions. The numerical results show that the new modifications are more efficient than recent CG methods in terms of the number of iterations, number of function evaluations, number of gradient evaluations, and CPU time.
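For reference, the two update parameters named above are not reproduced in this excerpt; their standard, well-known definitions (stated here for completeness rather than quoted from this summary) are

$$\beta_k^{PRP} = \frac{g_k^T (g_k - g_{k-1})}{\|g_{k-1}\|^2}, \qquad \beta_k^{HS} = \frac{g_k^T (g_k - g_{k-1})}{d_{k-1}^T (g_k - g_{k-1})},$$

where $g_k = \nabla f(x_k)$ and $d_{k-1}$ is the previous search direction.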

Highlights

  • We consider the unconstrained optimization problem $\min \{ f(x) : x \in \mathbb{R}^n \}$ (1.1), where $f : \mathbb{R}^n \to \mathbb{R}$ is a continuously differentiable function whose gradient is denoted by $g(x) = \nabla f(x)$.

  • In 2006, Wei et al. [16] gave a new positive CG method, quite similar to the original PRP method, that has global convergence under both exact and inexact line searches, namely $\beta_k^{WYL}$ (its standard form is sketched below).
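The formula itself is truncated in this excerpt; the standard published form of the Wei–Yao–Liu parameter (quoted here as a well-known definition, not taken from this summary) is

$$\beta_k^{WYL} = \frac{g_k^T \left( g_k - \frac{\|g_k\|}{\|g_{k-1}\|} g_{k-1} \right)}{\|g_{k-1}\|^2}.$$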

Summary

Introduction

We consider the unconstrained optimization problem

$$\min \{ f(x) : x \in \mathbb{R}^n \}, \tag{1.1}$$

where $f : \mathbb{R}^n \to \mathbb{R}$ is a continuously differentiable function whose gradient is denoted by $g(x) = \nabla f(x)$. To solve (1.1) using the CG method, we use the following iterative scheme, starting from an initial point $x_0 \in \mathbb{R}^n$:

$$x_{k+1} = x_k + \alpha_k d_k, \quad k = 0, 1, 2, \ldots,$$

where $\alpha_k > 0$ is the step size obtained by some line search. To obtain the step length $\alpha_k$, we have the following two line searches:

1. Exact line search:
$$f(x_k + \alpha_k d_k) = \min_{\alpha \ge 0} f(x_k + \alpha d_k), \tag{1.4}$$
which is computationally expensive if the function has many local minima.
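Since the excerpt elides the direction-update rule and the inexact line search details, the following is a minimal Python sketch of a generic nonlinear CG loop. It assumes the classical PRP parameter, the PRP+ restart safeguard, the usual direction update $d_{k+1} = -g_{k+1} + \beta_k d_k$, and an Armijo backtracking line search; it is illustrative, not the paper's proposed method. As the abstract notes, plain PRP does not guarantee a descent direction for general functions, which is why a safeguard is included here.

```python
import numpy as np

def armijo_backtracking(f, xk, dk, gk, alpha=1.0, rho=0.5, c=1e-4):
    """Inexact line search: shrink alpha until the Armijo sufficient-decrease
    condition f(x + a*d) <= f(x) + c*a*g^T d holds (assumes d is a descent direction)."""
    slope = gk @ dk
    while f(xk + alpha * dk) > f(xk) + c * alpha * slope and alpha > 1e-12:
        alpha *= rho
    return alpha

def cg_prp(f, grad, x0, tol=1e-6, max_iter=1000):
    """Nonlinear CG loop with the classical PRP parameter (illustrative sketch only)."""
    x, g = x0, grad(x0)
    d = -g                                    # initial direction: steepest descent
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        alpha = armijo_backtracking(f, x, d, g)
        x = x + alpha * d                     # the CG iteration x_{k+1} = x_k + alpha_k d_k
        g_new = grad(x)
        beta = g_new @ (g_new - g) / (g @ g)  # classical PRP parameter
        beta = max(beta, 0.0)                 # PRP+ safeguard: restart when beta < 0
        d = -g_new + beta * d                 # standard direction update (elided in the excerpt)
        g = g_new
    return x

# Usage on a simple quadratic test function with minimizer (-1, 0):
f = lambda x: 0.5 * x @ x + x[0]
grad = lambda x: x + np.array([1.0, 0.0])
print(cg_prp(f, grad, np.array([3.0, -4.0])))
```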

The remaining section headings listed for the paper are:

  • Inexact line search (the standard strong Wolfe conditions are sketched below)
  • The global convergence properties: Assumption 1
  • The global convergence properties of $\beta_k^{A1}$
  • The global convergence properties of $\beta_k^{A2}$
  • Conclusion
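The excerpt lists the inexact line search section without reproducing its contents. As a standard point of reference (not necessarily the exact conditions this paper adopts), CG convergence analyses typically use the strong Wolfe conditions:

$$f(x_k + \alpha_k d_k) \le f(x_k) + \delta \alpha_k g_k^T d_k, \qquad |g(x_k + \alpha_k d_k)^T d_k| \le \sigma |g_k^T d_k|,$$

where $0 < \delta < \sigma < 1$.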
