Abstract

This chapter presents the basic iterative methods, Jacobi, Gauss-Seidel, and SOR, that serve as models for more advanced methods. The Jacobi iteration advances using only values from the previous iterate, whereas Gauss-Seidel uses new component values as soon as they are computed; as a result, Gauss-Seidel generally converges faster. SOR (successive overrelaxation) computes a weighted average of the Gauss-Seidel components and the previous ones, with a relaxation parameter ω that must lie in the range 0 < ω < 2. Convergence of these methods depends on the iteration matrix B, the matrix for which x_k = Bx_{k−1} + c. If the norm of B is less than 1 in some subordinate norm, the iteration converges, and the iteration converges if and only if the spectral radius of B is less than 1. The spectral radius also indicates roughly how fast the iteration will converge: the smaller it is, the faster the convergence. The Jacobi and Gauss-Seidel methods converge if A is strictly diagonally dominant, and the Gauss-Seidel iteration converges if A is positive definite. Convergence of the SOR iteration is guaranteed if 0 < ω < 2 and A is positive definite. When convergence is not guaranteed, it is possible for one of these iterations to succeed while another fails. Choosing ω for SOR is difficult, and a precise optimal value is known for only a few classes of matrices; although costly, it can be estimated by running a numerical experiment. The Poisson equation is very important in many fields of science and engineering. The chapter presents a five-point central difference approximation for the equation and uses the SOR iteration to approximate the solution of Poisson's equation with zero boundary conditions.
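The following is a minimal sketch, not taken from the chapter, of the three iterations applied to a small strictly diagonally dominant system Ax = b, together with the spectral radius of the Jacobi iteration matrix B = −D⁻¹(L + U). The test matrix, right-hand side, iteration counts, and ω = 1.2 are illustrative choices only.

import numpy as np

A = np.array([[ 4.0, -1.0,  0.0],
              [-1.0,  4.0, -1.0],
              [ 0.0, -1.0,  4.0]])
b = np.array([2.0, 4.0, 10.0])

D = np.diag(np.diag(A))
L = np.tril(A, -1)
U = np.triu(A, 1)

# Jacobi iteration matrix; the iteration converges iff its spectral radius is < 1.
B_jacobi = -np.linalg.solve(D, L + U)
rho = max(abs(np.linalg.eigvals(B_jacobi)))
print("spectral radius of Jacobi iteration matrix:", rho)

def jacobi(A, b, x0, iters=50):
    d = np.diag(A)
    R = A - np.diag(d)              # off-diagonal part L + U
    x = x0.copy()
    for _ in range(iters):
        x = (b - R @ x) / d         # uses only values from the previous iterate
    return x

def gauss_seidel(A, b, x0, iters=50):
    n = len(b)
    x = x0.copy()
    for _ in range(iters):
        for i in range(n):
            # new components x[:i] are used as soon as they are computed
            s = A[i, :i] @ x[:i] + A[i, i+1:] @ x[i+1:]
            x[i] = (b[i] - s) / A[i, i]
    return x

def sor(A, b, x0, omega=1.2, iters=50):
    n = len(b)
    x = x0.copy()
    for _ in range(iters):
        for i in range(n):
            s = A[i, :i] @ x[:i] + A[i, i+1:] @ x[i+1:]
            x_gs = (b[i] - s) / A[i, i]                # Gauss-Seidel value
            x[i] = (1 - omega) * x[i] + omega * x_gs   # weighted average, 0 < omega < 2
    return x

x0 = np.zeros(3)
print("Jacobi:      ", jacobi(A, b, x0))
print("Gauss-Seidel:", gauss_seidel(A, b, x0))
print("SOR (w=1.2): ", sor(A, b, x0))
print("exact:       ", np.linalg.solve(A, b))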
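A second rough sketch, again under illustrative assumptions rather than the chapter's own code, applies SOR to the five-point central difference approximation of Poisson's equation −u_xx − u_yy = f on the unit square with zero boundary conditions. The grid size, source term f ≡ 1, stopping tolerance, and sweep limit are arbitrary; the value of ω used is the known optimum for this model problem, ω = 2/(1 + sin(πh)).

import numpy as np

n = 49                          # interior grid points per direction
h = 1.0 / (n + 1)
f = np.ones((n, n))             # example source term f(x, y) = 1

u = np.zeros((n + 2, n + 2))    # includes the zero boundary values

# Known optimal relaxation parameter for the five-point Laplacian on a square.
omega = 2.0 / (1.0 + np.sin(np.pi * h))

for sweep in range(500):
    max_change = 0.0
    for i in range(1, n + 1):
        for j in range(1, n + 1):
            # Gauss-Seidel value from the five-point stencil:
            # u_ij = (u_W + u_E + u_S + u_N + h^2 f_ij) / 4
            u_gs = 0.25 * (u[i - 1, j] + u[i + 1, j] + u[i, j - 1] + u[i, j + 1]
                           + h * h * f[i - 1, j - 1])
            new = (1 - omega) * u[i, j] + omega * u_gs
            max_change = max(max_change, abs(new - u[i, j]))
            u[i, j] = new
    if max_change < 1e-8:
        print("converged after", sweep + 1, "sweeps")
        break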
