Abstract

This paper considers accelerating the convergence rate of an iterative numerical scheme at the post-processing stage. The methodology adopted here is: (1) residual eigenmodes lying in the convex hull that contains the origin are eliminated; (2) the remaining residual terms are smoothed away by the main convergence algorithm. For this purpose, the polynomial matrix approach is employed to derive the characteristic equation by two different methods. The first method is based on vector scaling and the second on the normal equations approach. The input for both methods is the solution difference between two consecutive iteration/cycle levels obtained from the main program. The singular value decomposition was employed for both methods because of the ill-conditioned structure of the matrices involved. The use of the explicit form of the Richardson extrapolation in the present work removes the need to employ the Richardson iteration with a Leja ordering. The performance of these methods was compared with the GMRES algorithm for three representative problems: a two-dimensional boundary value problem based on the Laplace equation, a three-dimensional multigrid potential solution over a sphere, and the one-dimensional steady-state Burgers equation. In all three examples both methods achieve a rate of convergence equal to, or better than, that of the GMRES method in terms of computer operation count. In terms of storage requirements, however, the method based on vector scaling has a significant advantage over both the normal equations approach and the GMRES method, since it requires only a single vector of length N (the number of grid points). Copyright © 2001 John Wiley & Sons, Ltd.
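The general idea, accelerating a fixed-point iteration at the post-processing stage using only differences between consecutive iterates, solved by an SVD-based least-squares step because the difference matrix is ill-conditioned, can be illustrated with a minimal sketch. This is not the paper's algorithm: it uses minimal polynomial extrapolation (a related difference-based acceleration technique) on a synthetic linear iteration, and all names and parameters below are illustrative assumptions.

```python
import numpy as np

# Sketch only: accelerate a linear fixed-point iteration x <- G x + f
# using minimal polynomial extrapolation (MPE), a difference-based
# post-processing step.  The small least-squares problem is solved via
# SVD (np.linalg.lstsq) since the difference matrix is ill-conditioned.
# G, f, and all sizes here are synthetic, not from the paper.

n = 50
rng = np.random.default_rng(0)
A = rng.standard_normal((n, n))
rho = np.max(np.abs(np.linalg.eigvals(A)))
G = 0.9 * A / rho                      # contraction, spectral radius 0.9
f = rng.standard_normal(n)
x_true = np.linalg.solve(np.eye(n) - G, f)

# Run k+1 plain iterations of the "main program", keeping the iterates.
k = 10
xs = [np.zeros(n)]
for _ in range(k + 1):
    xs.append(G @ xs[-1] + f)

# Differences between consecutive iteration levels (the only input).
U = np.column_stack([xs[j + 1] - xs[j] for j in range(k + 1)])

# MPE: least-squares solve U[:, :k] c ~ -U[:, k], then normalize the
# coefficients so they sum to one and extrapolate.
c, *_ = np.linalg.lstsq(U[:, :k], -U[:, k], rcond=None)
c = np.append(c, 1.0)
gamma = c / c.sum()                    # extrapolation weights
x_mpe = sum(g * x for g, x in zip(gamma, xs[: k + 1]))

err_plain = np.linalg.norm(xs[-1] - x_true)
err_mpe = np.linalg.norm(x_mpe - x_true)
```

On a linear problem like this, the extrapolated solution `x_mpe` is typically far more accurate than the last plain iterate, which is the sense in which such post-processing accelerates the underlying scheme.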
