Abstract

In 1952, Hestenes and Stiefel introduced the conjugate gradient algorithm in their landmark paper [27] as a method for solving the linear equation Ax = b with A a positive definite n × n matrix (see the book [26] of Hestenes for a broad exposition). The algorithm has fascinated numerical analysts ever since, for various reasons. The cg-algorithm combines features of direct and iterative methods, which attracted attention in the early years: like other iterative methods, it generates a sequence $$x_i$$ of vectors approximating the solution $$\overline x$$ in a well-defined way, but like direct methods it terminates with the exact solution after at most n steps, at least in theory. Many expectations were disappointed when it was found that, due to roundoff, the n-step termination property does not hold in practice. Viewed as an iterative method, however, the cg-algorithm has very attractive features. Its application to the iterative solution of large sparse systems was discussed very early [11] by Stiefel and his coworkers. Like other iterative methods, it essentially requires only the formation of one matrix-vector product A·x per iteration, so that the iterations are inexpensive even for large matrices A, provided they are sparse. The iterative aspect of the method has been particularly emphasized since the work of Reid [38].
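The features described above can be illustrated with a minimal sketch of the textbook cg-iteration in Python with NumPy (function name, tolerance, and stopping rule are illustrative choices, not part of the original paper): each pass through the loop forms exactly one matrix-vector product A·p, and in exact arithmetic the loop would terminate in at most n steps.

```python
import numpy as np

def conjugate_gradient(A, b, x0=None, tol=1e-10, max_iter=None):
    """Textbook cg-sketch for A x = b with A symmetric positive definite."""
    n = b.shape[0]
    x = np.zeros(n) if x0 is None else x0.astype(float).copy()
    if max_iter is None:
        max_iter = n  # at most n steps in exact arithmetic
    r = b - A @ x      # initial residual
    p = r.copy()       # first search direction
    rs_old = r @ r
    for _ in range(max_iter):
        Ap = A @ p     # the single matrix-vector product per iteration
        alpha = rs_old / (p @ Ap)
        x = x + alpha * p
        r = r - alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:  # in practice: stop on small residual,
            break                  # since n-step termination fails under roundoff
        p = r + (rs_new / rs_old) * p  # next A-conjugate direction
        rs_old = rs_new
    return x
```

Because the matrix enters only through the product A @ p, the same loop applies unchanged when A is stored in a sparse format, which is the setting Stiefel and his coworkers already considered.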

