Abstract
The Conjugate Gradient (CG) method is a technique for solving nonlinear unconstrained optimization problems. In this paper, we analysed the performance of two modifications of the CG coefficient and compared the results with classical conjugate gradient methods. The proposed methods possess global convergence properties for general functions under exact line search. Numerical experiments show that the two modifications are more efficient on the test problems than the classical CG coefficients.
Introduction
Consider the following nonlinear unconstrained optimization problem:
min{f(x) : x ∈ R^n}, (1)
where f: R^n → R is a continuously differentiable function that is bounded below.
In this paper, we analysed the performance of two modifications of Conjugate Gradient (CG) coefficients and compared their performance with that of the classical CG methods of Fletcher and Reeves (FR) and Polak-Ribiere-Polyak (PRP) under exact line search.
Summary
Consider the nonlinear unconstrained optimization problem min{f(x) : x ∈ R^n}. Classical CG methods include Fletcher-Reeves (FR) (Fletcher and Reeves, 1964), Polak-Ribiere-Polyak (PRP) (Polak and Ribiere, 1969; Polyak, 1969), Hestenes-Stiefel (HS) (Hestenes and Stiefel, 1952), Conjugate Descent (CD) (Fletcher, 1980), and Liu-Storey (LS) (Liu and Storey, 1991); each is defined by its own choice of the CG parameter βk, such as βkFR. In this paper, we analysed the performance of two modifications of the CG coefficient and compared their performance with that of the classical FR and PRP methods under exact line search. This is done to improve the overall performance of the resulting algorithms.
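To make the setting concrete, the sketch below shows a generic nonlinear CG iteration with the classical FR and PRP coefficients mentioned above. This is not the paper's proposed modifications; it is a minimal illustration, and the grid-based line minimisation is only a numerical stand-in for the exact line search assumed in the analysis.

```python
import numpy as np

def nonlinear_cg(f, grad, x0, beta_rule="FR", tol=1e-6, max_iter=1000):
    """Sketch of nonlinear CG with classical FR or PRP coefficients.

    The exact line search is approximated by minimising f along the
    search direction over a fixed grid of step sizes (an illustrative
    stand-in, not the paper's procedure).
    """
    x = np.asarray(x0, dtype=float)
    g = grad(x)
    d = -g  # initial direction: steepest descent
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        # Approximate exact line search: minimise phi(a) = f(x + a*d)
        alphas = np.linspace(0.0, 1.0, 201)[1:]
        alpha = min(alphas, key=lambda a: f(x + a * d))
        x_new = x + alpha * d
        g_new = grad(x_new)
        if beta_rule == "FR":   # Fletcher-Reeves: ||g_{k+1}||^2 / ||g_k||^2
            beta = (g_new @ g_new) / (g @ g)
        else:                   # PRP: g_{k+1}^T (g_{k+1} - g_k) / ||g_k||^2
            beta = g_new @ (g_new - g) / (g @ g)
            beta = max(beta, 0.0)  # common nonnegativity safeguard
        d = -g_new + beta * d      # conjugate direction update
        x, g = x_new, g_new
    return x
```

For example, minimising the convex quadratic f(x) = x1^2 + 2*x2^2 from the starting point (1, 1) drives the iterates toward the minimiser at the origin under either coefficient choice.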