Abstract

Sparse linear algebra comprises fundamental operations in many large-scale scientific computing and real-world applications. Its performance is bottlenecked because it consists mainly of memory-bound computations with low arithmetic intensity, and improving that performance has increasingly become a focus of research. Parallel computing is currently the most popular approach to accelerating sparse linear algebra, but it faces several challenges: large-scale data is difficult to store, and the sparsity of the data leads to irregular memory accesses and parallel load imbalance. This article therefore provides a comprehensive overview of accelerating sparse linear algebra operations on parallel computing platforms, focusing on four main classes of operations: sparse matrix-vector multiplication (SpMV), sparse matrix-sparse vector multiplication (SpMSpV), sparse general matrix-matrix multiplication (SpGEMM), and sparse tensor algebra. The takeaways from this article include the following: understanding the challenges of accelerating sparse linear algebra on various hardware platforms; understanding how structured data sparsity can improve storage efficiency; understanding how to optimize parallel load balance; understanding how to improve the efficiency of memory accesses; understanding how adaptive frameworks automatically select optimal algorithms; and understanding recent design trends in the acceleration of parallel sparse linear algebra.
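
To make the memory-bound character concrete, the following is a minimal C sketch of SpMV in the compressed sparse row (CSR) format, y = A * x. The function name and signature are illustrative, not taken from the article. Each nonzero contributes a single multiply-add, while the gather x[col_idx[j]] is an irregular, data-dependent load, which is why arithmetic intensity stays low.

```c
#include <stddef.h>

/* Illustrative CSR SpMV sketch: y = A * x.
 * row_ptr has length n_rows + 1; col_idx and vals have length nnz. */
void spmv_csr(size_t n_rows,
              const size_t *row_ptr,
              const size_t *col_idx,
              const double *vals,
              const double *x,
              double *y)
{
    /* Parallelizing this loop over rows (e.g., with OpenMP or one GPU
     * thread per row) is the usual starting point; rows with very
     * different nonzero counts then cause the load imbalance the
     * survey discusses. */
    for (size_t i = 0; i < n_rows; ++i) {
        double sum = 0.0;
        for (size_t j = row_ptr[i]; j < row_ptr[i + 1]; ++j)
            sum += vals[j] * x[col_idx[j]];  /* irregular access to x */
        y[i] = sum;
    }
}
```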
