Abstract

Sparse matrix multiplication (SpGeMM) has grown in importance in recent years due to its applications in data science and machine learning. Consequently, considerable research has focused on accelerating this kernel on GPUs. Designing massively parallel algorithms for SpGeMM is challenging because the computation pattern is highly irregular, and the required memory and number of operations depend on how the nonzero layouts of the inputs interact. One strategy for attacking this kernel is to propose new sparse matrix storage formats that help mitigate this irregularity. In previous work, we began a study of the recently proposed bmSparse matrix format, suggesting several modifications to the SpGeMM algorithm. This work integrates those extensions and proposes new improvements to unleash bmSparse's full potential before comparing it with more established options. In particular, we enhance one of the most computationally demanding stages with an adaptive technique, apply optimizations to achieve more efficient data accesses, and analyze the effect of using Tensor Cores to accelerate the multiplication stage of the algorithm. Experimental results on a set of real-world sparse matrices show that the optimized implementation largely outperforms vendor implementations such as NVIDIA cuSPARSE and Intel MKL's CSR variant, while being competitive with MKL's BSR variant.
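
To make the input-dependent nature of the kernel concrete, here is a minimal sketch of SpGeMM semantics (C = A·B with sparse operands) using SciPy's CSR format; this is only an illustration of why the output size and work depend on how the operands' nonzero layouts interact, and it does not model the bmSparse format or the GPU kernels studied in the paper.

```python
# Minimal SpGeMM illustration: multiply two sparse matrices in CSR format.
# The number of nonzeros in C (and hence the work and memory needed) is not
# known in advance -- it depends on how the nonzero patterns of A and B overlap.
import scipy.sparse as sp

A = sp.random(1000, 1000, density=0.01, format="csr", random_state=0)
B = sp.random(1000, 1000, density=0.01, format="csr", random_state=1)

C = A @ B  # sparse-sparse product; output sparsity emerges from the inputs

print(f"nnz(A) = {A.nnz}, nnz(B) = {B.nnz}, nnz(C) = {C.nnz}")
```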
