Abstract
While the proper orthogonal decomposition (POD) is optimal under certain norms, it is also expensive to compute. For large matrices, the QR decomposition provides a tractable alternative. Under the assumption that the factorization is rank-revealing (RRQR), the approximation error incurred is similar to the POD error; furthermore, the authors show the existence of an RRQR with exactly the same error estimate as POD. To numerically realize an RRQR decomposition, they discuss the (iterative) modified Gram-Schmidt algorithm with pivoting and reduced basis methods, and show that these two seemingly different approaches are equivalent. They then describe an MPI/OpenMP parallel code that implements one of the QR-based model reduction algorithms analyzed, and document its scalability for large problems such as gravitational waves, demonstrating excellent scalability up to 32,768 cores on complex dense matrices as large as 10,000 × 3,276,800.
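The pivoted modified Gram-Schmidt procedure mentioned in the abstract can be sketched as follows. This is a minimal serial illustration (not the authors' MPI/OpenMP implementation), using the classic greedy pivoting heuristic of selecting, at each step, the remaining column with the largest residual norm; the function name and tolerance parameter are illustrative choices.

```python
import numpy as np

def mgs_pivoted_qr(A, tol=1e-12):
    """Column-pivoted modified Gram-Schmidt QR of an m x n matrix A.

    Returns Q (m x k), R (k x n), and a permutation piv such that
    A[:, piv] is approximately Q @ R, where k <= n is the numerical rank
    detected via the tolerance on residual column norms.
    """
    A = np.array(A, dtype=float, copy=True)
    m, n = A.shape
    piv = np.arange(n)
    Q = np.zeros((m, n))
    R = np.zeros((n, n))
    k = 0
    for j in range(n):
        # Greedy pivot: bring the remaining column with the largest
        # residual norm into position j (rank-revealing heuristic).
        norms = np.linalg.norm(A[:, j:], axis=0)
        p = j + int(np.argmax(norms))
        if norms[p - j] <= tol:
            break  # remaining columns are numerically dependent
        A[:, [j, p]] = A[:, [p, j]]
        R[:, [j, p]] = R[:, [p, j]]
        piv[[j, p]] = piv[[p, j]]
        # Modified Gram-Schmidt step: normalize column j, then
        # immediately orthogonalize all trailing columns against it.
        R[j, j] = np.linalg.norm(A[:, j])
        Q[:, j] = A[:, j] / R[j, j]
        R[j, j + 1:] = Q[:, j] @ A[:, j + 1:]
        A[:, j + 1:] -= np.outer(Q[:, j], R[j, j + 1:])
        k = j + 1
    return Q[:, :k], R[:k], piv
```

Truncating after the first few pivoted columns yields the reduced basis; the equivalence noted in the abstract is that this greedy column selection coincides with the greedy snapshot selection of reduced basis methods.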