Abstract

Two modified three-term conjugate gradient algorithms that satisfy both the descent condition and a Dai-Liao type conjugacy condition are presented for unconstrained optimization. The first algorithm modifies the Hager-Zhang algorithm so that the search direction is a descent direction and satisfies the Dai-Liao type conjugacy condition. The second, a simple three-term conjugate gradient method, generates sufficient descent directions at every iteration; moreover, this property is independent of the steplength line search. Both algorithms can also be viewed as modifications of the MBFGS method, but with a different z_k. Under some mild conditions, the given methods are globally convergent for general functions under the Wolfe line search. Numerical experiments show that the proposed methods are robust and efficient.
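To make the structure of such methods concrete, the following is a minimal sketch of a generic three-term direction update, together with the sufficient descent condition g_kᵀ d_k ≤ −c‖g_k‖² it is meant to satisfy. The β_k and θ_k formulas below are a standard illustrative choice (a Hestenes-Stiefel style pair), not the authors' exact z_k-based formulas.

    import numpy as np

    def three_term_direction(g_new, g_old, d_old, eps=1e-12):
        """Generic three-term CG direction d = -g + beta*d_old - theta*y.
        beta/theta are an illustrative HS-style pair, not this paper's formulas."""
        y = g_new - g_old            # gradient difference y_k
        denom = max(d_old @ y, eps)  # d_k^T y_k > 0 under Wolfe; eps guards the degenerate case
        beta = (g_new @ y) / denom
        theta = (g_new @ d_old) / denom
        return -g_new + beta * d_old - theta * y

With this particular pair, g_newᵀ d_new = −‖g_new‖² identically (the β and θ contributions cancel), which illustrates the kind of line-search-independent sufficient descent property the abstract refers to.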

Highlights

  • We consider the following optimization problem: min f(x), x ∈ Rⁿ (1), where f(x) : Rⁿ → R is a continuously differentiable function whose gradient ∇f(x) is denoted by g(x). The conjugate gradient method is very efficient for large-scale optimization problems

  • We present two modified simple three-term conjugate gradient methods, obtained from a modified BFGS (MBFGS) updating scheme in which the inverse approximation of the Hessian of f(x) is restarted as the identity matrix at every step

  • On the one hand, we improve a three-term conjugate gradient method obtained from an MBFGS update (see the sketch after this list)
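To illustrate the identity-restart idea from the highlights, the sketch below applies one standard BFGS inverse update to H = I and sets d = −H g; expanding the product shows that d is a combination of −g, s_k, and y_k, i.e., a three-term direction. This is the classical memoryless BFGS construction, shown only for intuition; the paper's MBFGS variant replaces y_k by a modified vector z_k that is not reproduced here.

    import numpy as np

    def memoryless_bfgs_direction(g, s, y):
        """Direction d = -H g, where H is one BFGS inverse update applied to I.
        Standard memoryless BFGS for intuition; the paper's MBFGS replaces y by z_k."""
        n = g.size
        rho = 1.0 / (s @ y)  # assumes the curvature condition s^T y > 0
        I = np.eye(n)
        # BFGS inverse update of the identity:
        # H = (I - rho*s*y^T) @ (I - rho*y*s^T) + rho*s*s^T
        H = (I - rho * np.outer(s, y)) @ (I - rho * np.outer(y, s)) + rho * np.outer(s, s)
        # Expanding H @ g shows d = -g + c1*s + c2*y for scalars c1, c2:
        # a three-term direction, so no n-by-n matrix is needed in practice.
        return -H @ g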

Summary

Introduction

We consider the unconstrained optimization problem min f(x), x ∈ Rⁿ, where f(x) : Rⁿ → R is a continuously differentiable function whose gradient ∇f(x) is denoted by g(x). The conjugate gradient method is very efficient for large-scale optimization problems. It generates a sequence of iterates x_{k+1} = x_k + α_k d_k, k = 0, 1, …, where d_k is the search direction and the steplength α_k is often determined by the Wolfe conditions

f(x_k + α_k d_k) ≤ f(x_k) + δ α_k g_kᵀ d_k,
g(x_k + α_k d_k)ᵀ d_k ≥ σ g_kᵀ d_k,

with 0 < δ < σ < 1. Classical conjugate gradient methods include the Fletcher-Reeves (FR) and Dai-Yuan (DY) methods, among others [2,3,4,5,6,7,8]. These methods differ only in the choice of the parameter β_k in the direction update d_k = −g_k + β_k d_{k−1}; for example, β_k^FR = ‖g_k‖² / ‖g_{k−1}‖². Throughout this paper, we always use ‖ ⋅ ‖ to mean the Euclidean norm
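As a concrete illustration of how the Wolfe conditions drive the steplength choice, here is a minimal bisection-style weak Wolfe line search; the bracketing scheme and the parameter values δ = 1e-4 and σ = 0.9 are common illustrative defaults, not values taken from this paper.

    import numpy as np

    def wolfe_line_search(f, grad, x, d, delta=1e-4, sigma=0.9, max_iter=50):
        """Find a steplength a satisfying the weak Wolfe conditions:
          sufficient decrease: f(x + a*d) <= f(x) + delta*a*g^T d
          curvature:           grad(x + a*d)^T d >= sigma*g^T d
        Assumes d is a descent direction (g^T d < 0)."""
        fx, gtd = f(x), grad(x) @ d
        lo, hi, a = 0.0, np.inf, 1.0
        for _ in range(max_iter):
            if f(x + a * d) > fx + delta * a * gtd:    # decrease fails: step too long
                hi = a
                a = 0.5 * (lo + hi)
            elif grad(x + a * d) @ d < sigma * gtd:    # curvature fails: step too short
                lo = a
                a = 2.0 * a if np.isinf(hi) else 0.5 * (lo + hi)
            else:
                return a                               # both Wolfe conditions hold
        return a                                       # best effort after max_iter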

Motivation
Three-Term Conjugate Gradient Method and Its Global Convergence
Numerical Experiments
Conclusions