Abstract

A broad range of scientific simulations involve solving large, computationally expensive linear systems of equations. For such systems, iterative solvers are typically preferred over direct methods because of their lower memory requirements and shorter execution times. However, selecting an appropriate iterative solver is problem-specific and depends on the type and symmetry of the coefficient matrix. Gauss-Seidel (GS) is an iterative method for linear systems whose coefficient matrix is either strictly diagonally dominant or symmetric positive definite. It refines the Jacobi method and typically converges in fewer iterations, but its sequential update pattern complicates the extraction of parallelism. In fact, most parallel derivatives of GS rely on the sparsity pattern of the coefficient matrix and require matrix reordering or domain decomposition. In this article, we introduce a new algorithm that exploits the convergence properties of GS while adopting the parallel structure of Jacobi. The proposed method works for both dense and sparse systems and is straightforward to implement. We have evaluated its performance on multicore and many-core architectures. Experimental results demonstrate the superior performance of the proposed algorithm compared with GS and Jacobi. A comparison with the built-in Krylov solvers in MATLAB shows that, in time per iteration, the Krylov methods are faster on CPUs, whereas our approach is significantly faster on GPUs. Lastly, we apply our method to the power flow problem, where it runs up to 87 times faster than GS.
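The paper's hybrid algorithm is developed in the full text; as background for the trade-off the abstract describes, the following is a minimal sketch of the two classical iterations being contrasted, assuming a dense NumPy matrix A with nonzero diagonal and a strictly diagonally dominant system so both methods converge. All names here are illustrative, not from the paper.

```python
# Background sketch (not the paper's proposed algorithm): classical Jacobi
# and Gauss-Seidel sweeps for Ax = b, assuming A is strictly diagonally
# dominant so both iterations converge.
import numpy as np

def jacobi_step(A, b, x):
    # Every component is computed from the previous iterate only, so all
    # n updates are independent -- this is the parallel-friendly structure.
    D = np.diag(A)
    return (b - A @ x + D * x) / D

def gauss_seidel_step(A, b, x):
    # Each component reuses values already updated in the current sweep,
    # which typically speeds convergence but makes the sweep sequential.
    x = x.copy()
    n = len(b)
    for i in range(n):
        x[i] = (b[i] - A[i, :i] @ x[:i] - A[i, i + 1:] @ x[i + 1:]) / A[i, i]
    return x

# Small strictly diagonally dominant example.
A = np.array([[4.0, 1.0, 0.0],
              [1.0, 5.0, 2.0],
              [0.0, 2.0, 6.0]])
b = np.array([1.0, 2.0, 3.0])
x = np.zeros(3)
for _ in range(25):
    x = gauss_seidel_step(A, b, x)
print(x, np.linalg.solve(A, b))  # iterate should match the direct solution
```

The sketch makes the abstract's tension concrete: the Jacobi update is one embarrassingly parallel vector operation, while the Gauss-Seidel loop carries a dependency from component i-1 to component i, which is what the proposed method aims to work around.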
