Abstract
In this work, we develop a fast hierarchical solver for large, sparse least squares problems. We build on spaQR (sparsified QR; Gnanasekaran and Darve, SIAM J Matrix Anal Appl 43(1):94–123, 2022), an algorithm previously developed by the authors for solving large sparse linear systems. Our algorithm is built on top of a Nested Dissection based multifrontal QR approach. We use low-rank approximations of the frontal matrices to sparsify the vertex separators at every level of the elimination tree. Using a two-step sparsification scheme, we reduce the number of columns and maintain the ratio of rows to columns in each front without introducing any additional fill-in. With this improved scheme, we show that the runtime of the algorithm scales as \(\mathcal{O}(M \log N)\) and that the factorization is stored in \(\mathcal{O}(M)\) memory. This is achieved at the expense of a small and controllable approximation error. The end result is an approximate factorization of the matrix, stored as a sequence of sparse orthogonal and upper-triangular factors, and hence easy to apply to a vector or use in a solve. Numerical experiments compare the performance of the spaQR algorithm with direct multifrontal QR, an inner-outer iterative method, and the CGLS iterative method with a diagonal preconditioner and with a Robust Incomplete Factorization (RIF) preconditioner.
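The central step described above, compressing a frontal matrix so that fewer separator columns stay coupled to the rest of the problem, can be illustrated at a toy level. The sketch below is not the authors' two-step scheme; assuming only a NumPy environment and a synthetic, numerically low-rank coupling block, it shows how a truncated SVD yields an orthogonal rotation of the separator columns after which most of them decouple and could be eliminated without creating fill-in.

```python
import numpy as np

# Illustrative sketch only (not the spaQR implementation): compress the
# coupling between a separator's columns and the rest of the matrix with a
# truncated SVD, so that only a few rotated "coarse" columns stay coupled.

rng = np.random.default_rng(0)
m, n_sep = 200, 40          # rows coupled to the separator, separator columns

# Synthetic off-diagonal coupling block, made numerically low-rank.
B = rng.standard_normal((m, 8)) @ rng.standard_normal((8, n_sep))
B += 1e-10 * rng.standard_normal((m, n_sep))

# Rank-revealing step via a truncated SVD of the coupling block.
U, s, Vt = np.linalg.svd(B, full_matrices=False)
eps = 1e-8                   # tolerance controlling the approximation error
r = int(np.sum(s > eps * s[0]))

# Orthogonal change of basis on the separator columns: B @ V = U @ diag(s),
# so columns r, r+1, ... carry only negligible coupling after the rotation.
V = Vt.T
B_rot = B @ V
print("numerical rank of coupling:", r)
print("relative coupling left in dropped columns:",
      np.linalg.norm(B_rot[:, r:]) / np.linalg.norm(B))
```

In spaQR an analogous compression is applied to the fronts at every level of the elimination tree, and the truncation tolerance plays the role of the small, controllable approximation error mentioned in the abstract.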