Abstract
This paper further studies the WYL conjugate gradient (CG) formula with $\beta_k^{WYL} \ge 0$ and presents a three-term WYL CG algorithm, which possesses the sufficient descent property without any conditions. Global convergence and linear convergence are proved; moreover, n-step quadratic convergence with a restart strategy is established if the initial step length is chosen appropriately. Numerical experiments on large-scale problems, including standard unconstrained optimization problems and engineering (benchmark) problems, show that the new algorithm is competitive with other similar CG algorithms.
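For context, the classical WYL parameter that the new formula builds on is commonly written as follows, with $g_k = \nabla f(x_k)$ and $d_{k-1}$ the previous search direction; this is only a reference sketch, since the paper's three-term direction adds a further correction term that is not reproduced here:
\[
\beta_k^{WYL} = \frac{\|g_k\|^2 - \frac{\|g_k\|}{\|g_{k-1}\|}\, g_k^{\top} g_{k-1}}{\|g_{k-1}\|^2},
\qquad
d_k = -g_k + \beta_k^{WYL}\, d_{k-1}.
\]
By the Cauchy–Schwarz inequality, $g_k^{\top} g_{k-1} \le \|g_k\|\,\|g_{k-1}\|$, which gives $\beta_k^{WYL} \ge 0$.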
Highlights
Consider the following minimization problem: $\min_{x \in \mathbb{R}^n} f(x)$ (1), where $f:\mathbb{R}^n \to \mathbb{R}$ is a continuously differentiable function
We show that the new conjugate gradient (CG) algorithm has global convergence for general functions and n-step quadratic convergence for uniformly convex functions with r-step restart and the standard Armijo line search under appropriate conditions (a minimal Armijo backtracking sketch follows these highlights)
This paper focuses on a modified WYL CG algorithm with a restart technique for large-scale optimization
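Since the highlights and the summary both refer to the standard Armijo line search, here is a minimal backtracking sketch in Python; the helper name armijo_backtracking and the default parameters (alpha0, rho, c) are illustrative choices, not values taken from the paper.

import numpy as np

def armijo_backtracking(f, grad_f, x, d, alpha0=1.0, rho=0.5, c=1e-4, max_iter=50):
    # Standard Armijo rule: accept the first alpha with
    #   f(x + alpha*d) <= f(x) + c * alpha * grad_f(x)^T d,
    # assuming d is a descent direction (grad_f(x)^T d < 0).
    fx = f(x)
    slope = float(np.dot(grad_f(x), d))
    alpha = alpha0
    for _ in range(max_iter):
        if f(x + alpha * d) <= fx + c * alpha * slope:
            return alpha
        alpha *= rho  # shrink the trial step and test again
    return alpha

Smaller values of c make the acceptance test easier to satisfy, while rho controls how aggressively the trial step is shrunk between tests.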
Summary
Consider the following minimization problem: $\min_{x \in \mathbb{R}^n} f(x)$ (1), where $f:\mathbb{R}^n \to \mathbb{R}$ is a continuously differentiable function. If a restart strategy is used, the PRP algorithm achieves n-step quadratic convergence (see [29,30,31]), and a three-term CG algorithm has been proved to have quadratic convergence with a restart strategy under certain inexact line searches and suitable assumptions. By restricting the parameter $\varsigma_2 < 1/4$ in the strong Wolfe-Powell line search, the WYL algorithm satisfies the sufficient descent property. We show that the new CG algorithm has global convergence for general functions and n-step quadratic convergence for uniformly convex functions with r-step restart and the standard Armijo line search under appropriate conditions.
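To make the overall iteration concrete, the following Python sketch combines the classical WYL parameter with a periodic steepest-descent restart and the armijo_backtracking helper sketched after the highlights; the function names, the default restart length (the problem dimension), and the two-term direction update are illustrative assumptions and do not reproduce the paper's three-term direction or its initial step-length rule.

import numpy as np

def wyl_beta(g_new, g_old):
    # Classical WYL parameter; by Cauchy-Schwarz it is nonnegative.
    return (np.dot(g_new, g_new)
            - (np.linalg.norm(g_new) / np.linalg.norm(g_old)) * np.dot(g_new, g_old)) / np.dot(g_old, g_old)

def restarted_cg(f, grad_f, x0, n_restart=None, tol=1e-6, max_iter=10000):
    # Generic CG loop: WYL beta, Armijo step length, and a restart to the
    # steepest-descent direction every n_restart iterations.
    x = np.asarray(x0, dtype=float)
    n_restart = n_restart or x.size
    g = grad_f(x)
    d = -g
    for k in range(max_iter):
        if np.linalg.norm(g) <= tol:
            break
        alpha = armijo_backtracking(f, grad_f, x, d)  # helper defined above
        x = x + alpha * d
        g_new = grad_f(x)
        if (k + 1) % n_restart == 0:
            d = -g_new                      # periodic restart
        else:
            d = -g_new + wyl_beta(g_new, g) * d
        g = g_new
    return x

# Example usage on a simple quadratic, assuming armijo_backtracking is in scope:
# x_star = restarted_cg(lambda x: x @ x, lambda x: 2 * x, np.ones(100))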