Abstract
We prove the global convergence of a two-parameter family of conjugate gradient methods that use a new stepsize formula, different from that of Wu [14]. Numerical results are presented to confirm the effectiveness of the proposed stepsizes by comparison with the stepsizes suggested by Sun and his colleagues [2, 12].
Highlights
In the implementation of any conjugate gradient (CG) method, the stepsize is often determined by line search conditions such as the Wolfe conditions [13] (stated after this list).
Such a line search requires extensive computation of function values and gradients, which often becomes a significant burden in large-scale problems. This spurred Sun and Zhang [12] to compute the stepsize by an explicit formula rather than by a line search, namely $\alpha_k = -\delta g_k^T d_k / (d_k^T Q_k d_k)$, where $Q_k$ is a symmetric positive definite matrix and $\delta > 0$ is a parameter (a code sketch of this idea follows the list).
We prove the global convergence of a two-parameter family of conjugate gradient methods.
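For reference, the weak Wolfe conditions [13] mentioned above are standard; they require the stepsize $\alpha_k$ along a descent direction $d_k$ to satisfy
\[
f(x_k + \alpha_k d_k) \le f(x_k) + c_1 \alpha_k g_k^T d_k,
\qquad
g(x_k + \alpha_k d_k)^T d_k \ge c_2 g_k^T d_k,
\]
with $0 < c_1 < c_2 < 1$ and $g = \nabla f$. Verifying both inequalities typically takes several trial evaluations of $f$ and $g$ per iteration, which is exactly the cost a formula-based stepsize avoids.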
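As a rough illustration of the line-search-free idea, here is a minimal Python sketch assuming $Q_k = I$, a Fletcher-Reeves direction update, and a fixed small $\delta$; these choices are simplifying assumptions for illustration only, not the paper's two-parameter family or the stepsize (1.8) of Wu [14].

```python
import numpy as np

def cg_formula_stepsize(grad, x0, delta=0.1, tol=1e-8, max_iter=5000):
    """CG iteration with a computed stepsize instead of a line search.

    Sketch of the Sun-Zhang idea [12] under the simplifying assumptions
    Q_k = I and a Fletcher-Reeves direction update; delta must be small
    relative to the gradient's Lipschitz constant to keep descent.
    """
    x = np.asarray(x0, dtype=float)
    g = grad(x)
    d = -g
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        # Formula stepsize: alpha_k = -delta * g_k^T d_k / (d_k^T Q_k d_k), Q_k = I
        alpha = -delta * (g @ d) / (d @ d)
        x = x + alpha * d
        g_new = grad(x)
        beta = (g_new @ g_new) / (g @ g)   # Fletcher-Reeves coefficient
        d = -g_new + beta * d
        g = g_new
    return x

# Usage on a strongly convex quadratic f(x) = 0.5 x^T A x - b^T x,
# whose unique minimizer solves A x = b.
A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, 1.0])
x_star = cg_formula_stepsize(lambda x: A @ x - b, x0=np.zeros(2))
print(x_star, np.linalg.solve(A, b))  # the two should nearly agree
```

With $\delta$ small relative to the largest eigenvalue of $A$, every step decreases $f$, so the iterates approach the minimizer, though more slowly than with an exact line search.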
Summary
The present section gathers technical results concerning the stepsize $\alpha_k$ generated by (1.8), which will be useful for deriving the global convergence properties of the method.

Assumption 2.1 The function $f$ is $LC^1$ (continuously differentiable with Lipschitz continuous gradient) and strongly convex on $\mathbb{R}^n$, i.e., there exist constants $\tau > 0$ and $\kappa \ge 0$ such that $(\nabla f(x) - \nabla f(y))^T (x - y) \ge \tau \|x - y\|^2$ and $\|\nabla f(x) - \nabla f(y)\| \le \kappa \|x - y\|$ for all $x, y \in \mathbb{R}^n$.

Lemma 2.2 Suppose that $x_k$ is given by (1.2), (1.3), and (1.8). Then $g_{k+1}^T d_k = \rho_k g_k^T d_k$ holds for all $k$.

Corollary 2.4 Suppose that Assumption 2.1 holds; its proof follows that of [14, Lemma 3].

Lemma 2.6 Suppose that Assumption 2.1 holds. Then $\sum_{k \ge 0} \alpha_k^2 \|d_k\|^2 < \infty$.
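To see the flavor of Lemma 2.6 numerically, one can accumulate the partial sums $\sum_k \alpha_k^2 \|d_k\|^2$ along the sketch above; under the same simplifying assumptions ($Q_k = I$, Fletcher-Reeves directions, fixed $\delta$) the sums stay bounded, consistent with the lemma.

```python
import numpy as np

# Empirical look at the Lemma 2.6-type bound sum_k alpha_k^2 ||d_k||^2 < infinity,
# reusing the simplifying assumptions of the sketch above (Q_k = I, FR directions).
A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, 1.0])
grad = lambda x: A @ x - b

x, delta = np.zeros(2), 0.1
g = grad(x)
d = -g
partial_sum = 0.0
for k in range(2000):
    if np.linalg.norm(g) < 1e-12:
        break
    alpha = -delta * (g @ d) / (d @ d)
    partial_sum += alpha**2 * (d @ d)   # accumulate alpha_k^2 ||d_k||^2
    x = x + alpha * d
    g_new = grad(x)
    d = -g_new + ((g_new @ g_new) / (g @ g)) * d
    g = g_new

print(partial_sum)  # remains bounded as k grows, consistent with Lemma 2.6
```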