Abstract
We describe a second-order accurate approach to sparsifying the off-diagonal matrix blocks in hierarchical approximate factorization methods for solving sparse linear systems. These methods repeatedly sparsify the fill-in blocks that arise in block Gaussian elimination, assuming those blocks are low-rank, to compute an approximate factorization of the given matrix. The factorization is then used as a preconditioner in a Krylov subspace method, such as the conjugate gradient method (CG), BiCGSTAB, or GMRES. However, to achieve fast convergence on ill-conditioned systems, each sparsification may introduce only a small error; under such tight tolerances the sparsifications restore sparsity inefficiently, and consequently the factorization can take a long time to compute. In the new approach, the 2-norm of the error incurred in sparsifying a matrix block is squared compared with previous approaches, with no additional computations. We incorporate the new approach into the recent sparsified nested dissection algorithm and test it on a wide range of symmetric positive definite problems. The new approach halves the number of CG iterations needed for convergence, significantly improving the overall performance of the algorithm. It can also be incorporated into other solvers that exploit the low-rank property of matrix blocks.
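To illustrate the kind of block sparsification the abstract refers to, the following is a minimal sketch (not the paper's algorithm) of compressing an off-diagonal block to low rank with a truncated SVD under a 2-norm tolerance; the function name `sparsify_block` and the tolerance parameter `tol` are illustrative assumptions. In this standard first-order setting, the 2-norm error equals the largest discarded singular value; the paper's second-order approach squares that error at no extra cost.

```python
import numpy as np

def sparsify_block(B, tol):
    """Replace an off-diagonal block B by a low-rank approximation.

    Minimal truncated-SVD sketch: keep singular values above `tol`,
    so the 2-norm of the discarded part is the first dropped singular value.
    """
    U, s, Vt = np.linalg.svd(B, full_matrices=False)
    k = int(np.sum(s > tol))                      # retained rank
    B_approx = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]
    err = s[k] if k < len(s) else 0.0             # 2-norm error of the truncation
    return B_approx, err

# Example: a numerically low-rank block, as fill-in blocks often are
# for discretized elliptic problems (synthetic data for illustration).
rng = np.random.default_rng(0)
X = rng.standard_normal((200, 30))
B = X @ np.diag(0.5 ** np.arange(30)) @ rng.standard_normal((30, 200))

B_approx, err = sparsify_block(B, tol=1e-6)
print(err, np.linalg.norm(B - B_approx, 2))       # the two values agree
```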