Abstract
Sparse matrix–vector multiplication (SpMV) appears in many application domains, and performance is the key consideration when implementing SpMV kernels. At the same time, accuracy is also important because rounding errors can drastically change the computed result. Multiple-precision arithmetic is a common approach to improving the accuracy of results. In this paper, we implement and evaluate multiple-precision SpMV kernels in the CSR, JAD, ELLPACK, and DIA matrix storage formats for graphics processing units. In the proposed implementation, the matrix is represented in double precision, while the input and output vectors are in multiple precision and internal computations are also performed in multiple precision. Our underlying floating-point arithmetic algorithms are based on the residue number system, which is attractive because of its carry-free nature and because it provides an arbitrary level of precision determined by the set of moduli that comprise the base of the system. In particular, we apply moduli sets of 8 to 64 moduli, reaching precision levels from 106 to 848 bits, and demonstrate how higher precision reduces the rounding errors in the SpMV kernel. We also conduct a thorough analysis of the CSR kernel in terms of roofline performance, occupancy, and memory bandwidth, and evaluate the performance and memory consumption of all the proposed kernels. Numerical experiments on matrices from real-world applications show that ELLPACK offers the highest performance in many cases, while JAD is a good trade-off between performance and space requirements. Furthermore, our proposed kernels in general substantially improve the efficiency of sparse matrix–vector multiplication compared to implementations built on top of existing multiple-precision CUDA libraries. Finally, we integrate the multiple-precision SpMV into a preconditioned conjugate gradient linear solver and identify test cases where our implementation exhibits superior convergence and numerical robustness.
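The carry-free property of the residue number system mentioned above can be illustrated with a minimal sketch. This is not the paper's implementation (which uses 8 to 64 moduli on the GPU); it is a small Python example with a hypothetical four-element moduli set, showing that addition and multiplication act independently on each residue channel, with the Chinese remainder theorem used to reconstruct the integer result:

```python
# Hypothetical pairwise-coprime moduli set; the dynamic range is their product.
MODULI = [251, 253, 255, 256]  # 253 = 11*23, 255 = 3*5*17, so all are coprime
M = 1
for m in MODULI:
    M *= m  # integers 0 <= x < M have a unique residue representation

def to_rns(x):
    """Represent x by its residue modulo each element of the base."""
    return [x % m for m in MODULI]

def rns_add(a, b):
    # Carry-free: each channel is computed independently, so the channels
    # can be processed in parallel (e.g., by different GPU threads).
    return [(ai + bi) % m for ai, bi, m in zip(a, b, MODULI)]

def rns_mul(a, b):
    return [(ai * bi) % m for ai, bi, m in zip(a, b, MODULI)]

def from_rns(r):
    """Chinese remainder theorem reconstruction of the represented integer."""
    x = 0
    for ri, mi in zip(r, MODULI):
        Mi = M // mi
        x += ri * Mi * pow(Mi, -1, mi)  # pow(.., -1, mi): modular inverse
    return x % M
```

Adding more moduli enlarges the dynamic range (and hence the attainable precision of the mantissas built on top of it) without introducing carry chains between channels, which is why the precision of the proposed kernels scales with the size of the moduli set.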