Abstract

The solution of linear systems is of considerable importance for computational problems arising in engineering, physics, chemistry, computer science, mathematics, medicine and economics. The treatment of costly and time-consuming problems, e.g. crash tests, simulation of the human lung and skin, calculation of electric and magnetic fields, thermal analysis and fluid dynamics, to name only a few, has become possible with recent developments in advanced computer architectures and iterative solvers. Generalized conjugate gradient (CG) methods are the most important iterative solvers because they converge very quickly under certain conditions. They are therefore widely used and under rapid further development.

The purpose of this paper is to present new results on the convergence of generalized CG methods. A convergence result for non-symmetric and non-positive definite matrices is given that includes the classical theory for symmetric, positive definite matrices as a special case.

The norms of the residuals produced by CG methods may oscillate heavily. Different remedies for smoothing this sequence have been proposed, for example by van der Vorst. In the 1980s, Schönauer introduced a smoothing algorithm that makes the residual norm a nonincreasing function of the iteration index. A complete theoretical analysis of this algorithm is given. A surprising result shows that the smoothing algorithm is, in a sense, optimal. Convergence estimates are derived from it. A geometric interpretation of the smoothing algorithm is given, showing the propagation of the errors.

It should be stressed that smooth convergence of the residuals is not equivalent to smooth convergence of the errors, which is the proper aim. A class of error-minimizing methods can easily be derived from the theory.
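The abstract does not spell out the smoothing recurrence itself. The following is a minimal sketch of residual smoothing in this spirit (the minimal-residual form later analyzed by Zhou and Walker), assuming the underlying solver supplies iterates x_k with residuals r_k = b - A x_k; the step size eta is chosen to minimize the smoothed residual norm, which forces it to be nonincreasing. The matrix `A`, right-hand side `b`, relaxation factor and the Richardson iteration used to generate oscillating residuals are hypothetical demo data, not taken from the paper.

```python
import numpy as np

def smooth(xs, rs):
    """Residual-smoothing sketch: given solver iterates xs[k] with
    residuals rs[k] = b - A @ xs[k], build smoothed iterates ys and
    smoothed residuals ss (with ss[k] = b - A @ ys[k]).

    Each eta minimizes ||s_{k-1} + eta*(r_k - s_{k-1})|| over eta,
    so ||ss[k]|| <= ||ss[k-1]|| and ||ss[k]|| <= ||rs[k]||.
    """
    ys, ss = [xs[0]], [rs[0]]
    for x, r in zip(xs[1:], rs[1:]):
        d = r - ss[-1]                          # direction in residual space
        dd = d @ d
        eta = 0.0 if dd == 0.0 else -(ss[-1] @ d) / dd
        ss.append(ss[-1] + eta * d)             # smoothed residual
        ys.append(ys[-1] + eta * (x - ys[-1]))  # matching smoothed iterate
    return ys, ss

# Hypothetical demo: Richardson iteration on a non-symmetric (non-normal)
# system whose raw residual norms oscillate before decaying.
A = np.array([[2.0, -1.0], [0.0, 2.0]])
b = np.array([1.0, 1.0])
x = np.zeros(2)
xs, rs = [x], [b - A @ x]
for _ in range(30):
    x = x + 0.9 * (b - A @ x)   # Richardson step, relaxation 0.9
    xs.append(x)
    rs.append(b - A @ x)

ys, ss = smooth(xs, rs)
```

Because eta is the unconstrained minimizer of a one-dimensional quadratic, eta = 0 (keep the old smoothed residual) and eta = 1 (take the new raw residual) are both feasible, which is exactly why the smoothed norm can never exceed either of them.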