Abstract
Assume that the coefficient matrix $A$ in the system $Ax = b$ is large and sparse. Consider the following two algorithms: DS, in which the system is solved by a direct application of Gaussian elimination, and IR, in which Gaussian elimination is combined with a large drop tolerance and followed by iterative refinement. Assume that sparse matrix techniques are used with both DS and IR. The performances of two codes, the NAG subroutines (based on DS) and the RECKU subroutines (based on IR), are compared on a wide set of test matrices. The comparison shows that the second algorithm, IR, performs better in general: the computing time and/or the storage needed may be reduced considerably when IR is used. Moreover, IR normally provides a reliable estimate of the accuracy of the computed solution. When the problems are time- and storage-consuming, IR is much better (it reduces the computing time by a factor of up to 10 and the storage by a factor of 2–3). It is also shown that IR is very efficient when linear least-squares problems are solved by the use of augmented matrices.
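The abstract does not spell out the RECKU implementation; the following is only a minimal sketch of the drop-tolerance-plus-refinement idea in Python, using SciPy's incomplete LU (`spilu`) as a stand-in for sparse Gaussian elimination with a large drop tolerance. The function name `solve_ir` and all parameter values are illustrative, not taken from the paper.

```python
# Sketch: factorize A with a large drop tolerance (cheap, approximate factors),
# then recover full accuracy by iterative refinement with those factors.
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def solve_ir(A, b, drop_tol=1e-2, tol=1e-12, max_iter=20):
    """Solve A x = b via drop-tolerance LU followed by iterative refinement."""
    A = sp.csc_matrix(A)
    lu = spla.spilu(A, drop_tol=drop_tol)   # approximate (incomplete) factorization
    x = lu.solve(b)                         # initial solution from the cheap factors
    for _ in range(max_iter):
        r = b - A @ x                       # residual of the current iterate
        d = lu.solve(r)                     # correction computed with the same factors
        x += d
        # the size of the last correction also gives an accuracy estimate
        if np.linalg.norm(d) <= tol * np.linalg.norm(x):
            break
    return x

# Usage example on a random diagonally dominant sparse system
n = 200
A = sp.random(n, n, density=0.02, format="csc") + 10.0 * sp.eye(n, format="csc")
b = np.ones(n)
x = solve_ir(A, b)
print(np.linalg.norm(A @ x - b))
```

For the least-squares application mentioned at the end, the augmented formulation is presumably the standard symmetric system $\begin{pmatrix} I & A \\ A^T & 0 \end{pmatrix} \begin{pmatrix} r \\ x \end{pmatrix} = \begin{pmatrix} b \\ 0 \end{pmatrix}$, whose solution $x$ minimizes $\|Ax - b\|_2$ with residual $r = b - Ax$; this coefficient matrix is large and sparse whenever $A$ is, so the same IR approach applies.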