Abstract

Many engineering and science problems require significant computational effort to solve large sparse linear systems, and Krylov subspace iterative solvers are widely used for this purpose. Iterative Krylov methods involve linear algebra operations such as vector summation, dot products, norms, and matrix-vector multiplication. Since these operations can be very costly in computation time on the Central Processing Unit (CPU), this paper focuses on the design of iterative solvers that take advantage of the massive parallelism of the Graphics Processing Unit (GPU). We consider the Stabilized BiConjugate Gradient (BiCGStab), Stabilized BiConjugate Gradient (l) (BiCGStab(l)), Generalized Conjugate Residual (P-GCR), Bi-Conjugate Gradient Conjugate Residual (P-BiCGCR), and transpose-free Quasi-Minimal Residual (P-tfQMR) methods for the solution of sparse linear systems with nonsymmetric matrices, and the Conjugate Gradient (CG) method for symmetric positive definite matrices. We discuss data formats and data structures for sparse matrices, and how to efficiently implement these solvers on NVIDIA's CUDA platform. The scalability and performance of the methods are tested on several engineering problems, and numerous numerical experiments clearly illustrate the robustness, competitiveness, and efficiency of our implementation compared to existing libraries.
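The dominant kernel shared by all of the Krylov methods listed above is the sparse matrix-vector product. As a minimal illustrative sketch only (the abstract does not specify the storage scheme, so the Compressed Sparse Row format and a one-thread-per-row mapping are assumed here; the paper's actual kernels may differ), such an operation can be expressed in CUDA as follows:

```cuda
// Sparse matrix-vector product y = A*x with A stored in CSR format.
// One thread computes one row of the result (assumed layout for illustration).
__global__ void spmv_csr(int n_rows,
                         const int    *row_ptr,  // size n_rows + 1
                         const int    *col_idx,  // size nnz
                         const double *values,   // size nnz
                         const double *x,
                         double       *y)
{
    int row = blockIdx.x * blockDim.x + threadIdx.x;
    if (row < n_rows) {
        double sum = 0.0;
        for (int j = row_ptr[row]; j < row_ptr[row + 1]; ++j)
            sum += values[j] * x[col_idx[j]];
        y[row] = sum;
    }
}

// Host-side launch, assuming the device arrays are already allocated and filled:
//   int threads = 256;
//   int blocks  = (n_rows + threads - 1) / threads;
//   spmv_csr<<<blocks, threads>>>(n_rows, d_row_ptr, d_col_idx, d_values, d_x, d_y);
```

This scalar kernel is only a baseline; in practice the choice of sparse format and of the thread-to-row mapping strongly affects memory coalescing and therefore GPU performance.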
