Abstract

Sparse matrix-vector multiplication (SpMV) is a key linear algebra kernel that is widely used in many application domains. Besides multi-core architectures, there is extensive research on accelerating SpMV on many-core Graphics Processing Units (GPUs). SpMV involves many indirect and irregular memory accesses, and load imbalance can occur when its computation is mapped onto single-instruction, multiple-data (SIMD) GPUs. Moreover, SpMV is highly memory-bandwidth-bound; although GPUs have massive computational resources, SpMV performance on GPUs is still unsatisfactory. In this paper, we present a new hybrid strategy for auto-tuning SpMV on GPUs that combines the advantages of row-major and column-major storage. Like many other strategies, we reorder a given sparse matrix by row length in decreasing order. To be more adaptive and efficient, we propose a new hybrid Blocked CSR and JDS (BCJ) format based on the original CSR and JDS formats. BCJ splits a sparse matrix into a denser part and a sparser part after reordering and uses a different kernel to process each part. We also propose a corresponding auto-tuning framework that transforms the matrix and launches kernels according to its sparsity characteristics. A CUDA implementation of BCJ significantly outperforms the original formats on a broad range of unstructured sparse matrices.
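
The following host-side C++ sketch illustrates the kind of row reordering and dense/sparse split the abstract describes: rows are sorted by decreasing length, long rows are kept in a row-major CSR part, and short rows are packed column-major as JDS. The fixed row-length threshold, the struct layouts, and the names (CSR, JDS, THRESH) are illustrative assumptions, not the paper's actual BCJ layout or auto-tuning policy.

    // Hypothetical sketch of a BCJ-style split: reorder rows by decreasing
    // length, keep the denser rows in row-major CSR, pack the sparser tail
    // column-major as JDS. The split heuristic here is an assumption.
    #include <algorithm>
    #include <cstdio>
    #include <vector>

    struct CSR {                       // standard compressed sparse row
        std::vector<int>    row_ptr;   // size rows + 1
        std::vector<int>    col_idx;
        std::vector<double> val;
    };

    struct JDS {                       // jagged diagonal storage (column-major)
        std::vector<int>    jd_ptr;    // start of each jagged diagonal
        std::vector<int>    col_idx;
        std::vector<double> val;
        int rows = 0;
    };

    int main() {
        // Toy 4x4 matrix in CSR form.
        CSR a{{0, 3, 4, 6, 7}, {0, 1, 3, 2, 0, 3, 1}, {1, 2, 3, 4, 5, 6, 7}};
        int rows = (int)a.row_ptr.size() - 1;

        // 1. Sort row indices by decreasing row length (the reordering step).
        std::vector<int> perm(rows);
        for (int i = 0; i < rows; ++i) perm[i] = i;
        auto len = [&](int r) { return a.row_ptr[r + 1] - a.row_ptr[r]; };
        std::sort(perm.begin(), perm.end(),
                  [&](int x, int y) { return len(x) > len(y); });

        // 2. Split: rows with >= THRESH nonzeros form the denser CSR part,
        //    the remaining short rows form the sparser JDS part.
        const int THRESH = 2;          // hypothetical split threshold
        int split = 0;
        while (split < rows && len(perm[split]) >= THRESH) ++split;

        // 3. Denser part: rebuild a row-major CSR over perm[0 .. split).
        CSR dense{{0}, {}, {}};
        for (int i = 0; i < split; ++i) {
            int r = perm[i];
            for (int k = a.row_ptr[r]; k < a.row_ptr[r + 1]; ++k) {
                dense.col_idx.push_back(a.col_idx[k]);
                dense.val.push_back(a.val[k]);
            }
            dense.row_ptr.push_back((int)dense.val.size());
        }

        // 4. Sparser part: pack perm[split .. rows) column by column (JDS).
        JDS sparse;
        sparse.rows = rows - split;
        int max_len = sparse.rows ? len(perm[split]) : 0;
        sparse.jd_ptr.push_back(0);
        for (int d = 0; d < max_len; ++d) {       // d-th nonzero of each row
            for (int i = split; i < rows; ++i) {
                int r = perm[i];
                if (len(r) > d) {
                    sparse.col_idx.push_back(a.col_idx[a.row_ptr[r] + d]);
                    sparse.val.push_back(a.val[a.row_ptr[r] + d]);
                }
            }
            sparse.jd_ptr.push_back((int)sparse.val.size());
        }

        printf("dense rows: %d, jds rows: %d\n", split, sparse.rows);
        // In a BCJ-style scheme, each part would then be handed to its own
        // SpMV kernel on the GPU.
        return 0;
    }

In practice, the choice of split point and threshold would be made by the auto-tuning framework based on the matrix's sparsity characteristics rather than a hard-coded constant as in this sketch.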
