Abstract

Over the last decades, the continuous expansion of supercomputing infrastructures has necessitated the design of scalable and robust parallel numerical methods for solving large sparse linear systems. A new approach to the additive projection parallel preconditioned iterative method, based on semiaggregation and a subspace compression technique, is presented for general sparse linear systems. The subspace compression technique utilizes a subdomain adjacency matrix and breadth-first search to discover and aggregate subdomains, limiting the average size of the local linear systems and thus reducing memory requirements. The depth of aggregation is controlled by a user-defined parameter. The local coefficient matrices reuse the aggregates computed during the formation of the subdomain adjacency matrix, avoiding recomputation and improving performance. Moreover, the rows and columns corresponding to the newly formed aggregates are ordered last to further reduce fill-in during the factorization of the local coefficient matrices. The method is based on nonoverlapping domain decomposition in conjunction with algebraic graph partitioning techniques for separating the subdomains. Finally, applicability and implementation issues are discussed, and numerical results along with comparative results are presented.
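The depth-limited breadth-first aggregation of subdomains described above can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: the adjacency-list representation, the function name `aggregate_subdomains`, and the greedy seeding order are all assumptions.

```python
from collections import deque

def aggregate_subdomains(adjacency, depth):
    """Greedily aggregate subdomains by depth-limited BFS.

    adjacency: dict mapping each subdomain id to a list of neighboring
    subdomain ids (a sparse subdomain adjacency matrix, stored by rows).
    depth: user-defined parameter limiting how many BFS levels around a
    seed subdomain are merged into one aggregate.
    Returns a dict mapping each subdomain id to its aggregate's seed id.
    """
    assignment = {}
    for seed in adjacency:
        if seed in assignment:
            continue  # already absorbed into an earlier aggregate
        assignment[seed] = seed
        frontier = deque([(seed, 0)])
        while frontier:
            node, level = frontier.popleft()
            if level == depth:
                continue  # stop expanding past the aggregation depth
            for nbr in adjacency[node]:
                if nbr not in assignment:
                    assignment[nbr] = seed
                    frontier.append((nbr, level + 1))
    return assignment
```

For a chain of four subdomains with depth 1, each aggregate absorbs one unassigned neighbor of its seed, so the average local-system size is halved; larger depth values yield fewer, larger aggregates.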

Highlights

  • Let us consider a sparse linear system of the following form: Ax = b, (1), where A is the coefficient matrix, b is the right-hand-side vector, and x is the solution vector

  • The numerical experiments were performed on a BlueGene/P (BG/P) supercomputer with 1024 quad-core CPUs

  • The parallelization in each multicore node is achieved with OpenMP, so that every subdomain is mapped to one core


Summary

Introduction

Let us consider a sparse linear system of the following form: Ax = b, (1), where A is the coefficient matrix, b is the right-hand-side vector, and x is the solution vector. In order to solve the linear system (1), a direct or an iterative method can be used. A direct method is computationally expensive and has excessive memory requirements. Iterative methods, cf. [1], have gained the attention of the scientific community during recent decades due to their efficiency and modest memory requirements for solving large sparse linear systems. Preconditioned iterative methods improve the convergence rate and have been used extensively for solving large sparse linear systems. The left preconditioned form of the linear system (1) is as follows: M⁻¹Ax = M⁻¹b, where M is the preconditioner.
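The effect of left preconditioning can be illustrated with the simplest choice, a diagonal (Jacobi) preconditioner M = diag(A). This sketch is not the paper's additive projection method; the function names and the use of plain Richardson iteration on the preconditioned system are illustrative assumptions.

```python
def jacobi_precondition(A, b):
    """Form the left preconditioned system M^{-1}A x = M^{-1}b
    with the Jacobi preconditioner M = diag(A).
    A is a dense row-major list of lists; b is a list."""
    n = len(A)
    MA = [[A[i][j] / A[i][i] for j in range(n)] for i in range(n)]
    Mb = [b[i] / A[i][i] for i in range(n)]
    return MA, Mb

def richardson(MA, Mb, iters=200):
    """Richardson iteration x <- x + (Mb - MA x) on the preconditioned
    system; converges when the spectral radius of I - MA is below 1."""
    n = len(Mb)
    x = [0.0] * n
    for _ in range(iters):
        r = [Mb[i] - sum(MA[i][j] * x[j] for j in range(n)) for i in range(n)]
        x = [x[i] + r[i] for i in range(n)]
    return x
```

For A = [[4, 1], [1, 3]] and b = [1, 2], the preconditioned iteration matrix I − M⁻¹A has spectral radius 1/√12 ≈ 0.29, so the iteration converges quickly to the exact solution x = (1/11, 7/11).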
