Abstract

We present fast and scalable parallel computations for a number of important and fundamental matrix problems on distributed memory systems (DMS). These problems include computing the powers, the inverse, the characteristic polynomial, the determinant, the rank, and the Krylov matrix of a matrix, computing an LU- and a QR-factorization, and solving linear systems of equations. These parallel computations are based on efficient implementations of the fastest sequential matrix multiplication algorithm on DMS. We show that, compared with the best known time complexities on PRAM, our parallel matrix computations achieve the same speeds on distributed memory parallel computers (DMPC) and incur only an extra polylog factor in the time complexities on DMS with hypercubic networks. Furthermore, our parallel matrix computations are fully scalable on DMPC and highly scalable over a wide range of system sizes on DMS with hypercubic networks. Such fast and scalable parallel matrix computations have not previously been achieved on any distributed memory system.
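As a small illustration of the reduction underlying such results (this is not the paper's parallel algorithm, just a sequential sketch): computing the power A^n of a matrix requires only O(log n) matrix multiplications via repeated squaring, so any fast matrix multiplication routine immediately yields fast matrix powering. The naive cubic multiply below is a stand-in for the fast multiplication primitive.

```python
# Sketch: matrix powering reduces to O(log n) matrix multiplications
# (repeated squaring). A fast multiplication routine could replace
# mat_mul; here a naive O(m^3) multiply keeps the example self-contained.

def mat_mul(A, B):
    m = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(m)) for j in range(m)]
            for i in range(m)]

def mat_pow(A, n):
    m = len(A)
    # Start from the identity matrix.
    result = [[1 if i == j else 0 for j in range(m)] for i in range(m)]
    base = A
    while n > 0:
        if n & 1:            # multiply in this power of two if the bit is set
            result = mat_mul(result, base)
        base = mat_mul(base, base)   # square the base each round
        n >>= 1
    return result

# The Fibonacci matrix [[1,1],[1,0]] raised to the 10th power encodes
# F(11), F(10), F(9):
print(mat_pow([[1, 1], [1, 0]], 10))  # [[89, 55], [55, 34]]
```

Each of the other problems listed above (inverse, determinant, rank, and so on) is similarly reduced to matrix multiplication, which is why an efficient distributed multiplication implementation drives the whole suite of computations.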
