Abstract
Binary Sparse Matrix-Vector Multiplication (SpMV) is a core computational kernel in web link analysis, integer factorization, compressed sensing, spectral graph theory, and other domains. Testing several popular GPU-based SpMV implementations on 400 sparse matrices, we observed that data transfer to GPU memory accounts for a large part of the total computation time. For binary sparse matrices, transferring the constant value “1”s is easily eliminated; compressing the index arrays, however, has remained a major challenge. This article proposes TaiChi, a new compression format that further reduces index data copies and improves SpMV performance, especially for diagonally dominant binary sparse matrices. An input matrix is first partitioned into relatively dense and ultra-sparse areas. The dense areas are then encoded inversely by marking their “0”s, while the ultra-sparse areas are encoded by marking their “1”s. Based on this partitioning and encoding, we also designed a new SpMV algorithm for binary matrices that uses only addition and subtraction. Evaluation on real-world binary sparse matrices shows that our hybrid encoding significantly reduces data transfer and speeds up kernel execution, achieving peak transfer and kernel-execution speedups of 5.63x and 3.84x on a GTX 1080 Ti, and 3.39x and 3.91x on a Tesla V100.
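To make the encoding idea concrete, the following is a minimal CPU sketch in C of the scheme the abstract describes, written against hypothetical data structures of our own naming (CSR-style arrays zero_ptr/zero_idx for the inversely encoded dense block and one_ptr/one_idx for the ultra-sparse remainder); it is an illustration under assumptions, not the paper's actual TaiChi layout or GPU kernel. The dense block is stored by the column indices of its zeros, the remainder by the column indices of its ones, and the multiply reduces to additions and subtractions only.

/* Minimal CPU sketch of the marking-zeros/marking-ones idea from the abstract.
 * Hypothetical layout: a 4x4 dense block (columns 0..3, all ones except the
 * stored zeros) plus an ultra-sparse remainder (columns 4..7, stored ones). */
#include <stdio.h>

#define N 4  /* rows */

/* Dense block, encoded inversely: CSR-style pointers into the ZERO positions. */
static const int zero_ptr[N + 1] = {0, 1, 2, 2, 3};
static const int zero_idx[]      = {2, 0, 1};   /* columns holding a 0 */

/* Ultra-sparse remainder, encoded normally: pointers into the ONE positions. */
static const int one_ptr[N + 1]  = {0, 1, 1, 2, 2};
static const int one_idx[]       = {5, 6};      /* columns holding a 1 */

int main(void) {
    const double x[2 * N] = {1, 2, 3, 4, 5, 6, 7, 8};
    double y[N];

    /* Sum of x over the dense block's column span, computed once;
     * a row with no stored zeros contributes exactly this sum. */
    double dense_sum = 0.0;
    for (int j = 0; j < N; ++j) dense_sum += x[j];

    for (int i = 0; i < N; ++i) {
        double yi = dense_sum;                    /* as if the row were all 1s */
        for (int k = zero_ptr[i]; k < zero_ptr[i + 1]; ++k)
            yi -= x[zero_idx[k]];                 /* subtract each marked 0    */
        for (int k = one_ptr[i]; k < one_ptr[i + 1]; ++k)
            yi += x[one_idx[k]];                  /* add each marked 1         */
        y[i] = yi;
    }

    for (int i = 0; i < N; ++i) printf("y[%d] = %g\n", i, y[i]);
    return 0;
}

Because every row of the dense block shares the same precomputed row sum, marking zeros stores one index per zero rather than one per nonzero, which is where the index-transfer savings would come from for the nearly full diagonal blocks of diagonally dominant matrices.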