Abstract
When performing the Cholesky factorization of a sparse matrix on a distributed-memory multiprocessor, the methods used for mapping the elements of the matrix and the operations constituting the factorization to the processors can have a significant impact on the communication overhead incurred. This paper explores how two techniques, one used when mapping dense Cholesky factorization and the other used when mapping sparse Cholesky factorization, can be integrated to achieve a communication-efficient parallel sparse Cholesky factorization. Two localizing techniques that further reduce the communication overhead are also described. The mapping strategies proposed here, as well as other previously proposed strategies, fit into the unifying framework developed in this paper. Communication statistics for sample sparse matrices are included.
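To make the mapping problem concrete, the following is a minimal sketch of one common dense-style mapping, a 2D cyclic distribution over a processor grid. This is only an illustrative assumption, not the specific strategy developed in the paper; the function name `cyclic_map` and the 2x2 grid are hypothetical choices for the example.

```python
from collections import Counter

def cyclic_map(i, j, pr, pc):
    """Map matrix entry (i, j) to a processor on a pr x pc grid
    using a 2D cyclic distribution (a common dense-Cholesky mapping)."""
    return (i % pr, j % pc)

# Example: distribute the lower triangle of a 6x6 matrix (where
# Cholesky factorization operates) over a hypothetical 2x2 grid and
# count how many entries each processor owns.
owners = Counter(
    cyclic_map(i, j, 2, 2)
    for i in range(6)
    for j in range(i + 1)  # lower-triangular entries only
)
```

A mapping like this balances load on dense triangles; the communication cost it induces on a *sparse* factorization depends on which entries are nonzero, which is why sparse-specific (e.g. elimination-tree-based) mappings and the integration discussed in the abstract matter.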