Abstract

Algebraic multigrid (AMG) is one of the most efficient and widely used methods for solving sparse linear systems. The AMG computation consists mainly of a series of iterative calculations involving general sparse matrix-matrix multiplication (SpGEMM) and sparse matrix-vector multiplication (SpMV), so optimizing these sparse matrix kernels is crucial for accelerating the solution of linear systems. In this paper, we first focus on optimizing the SpGEMM algorithm in AmgX, a popular AMG library for GPUs. We propose a new algorithm, SpGEMM-upper, which achieves an average speedup of 2.02x on a Tesla V100 and 1.96x on an RTX 3090 over the original algorithm. Next, through experimental investigation, we conclude that no single SpGEMM library or algorithm performs best for most sparse matrices, and the same holds for SpMV. We therefore build machine learning-based models that predict the optimal SpGEMM and SpMV algorithms to use during the AMG computation. Finally, we integrate the prediction models, SpGEMM-upper, and the other selected algorithms into a framework for adaptive sparse matrix computation in AMG. Our experimental results show that the framework achieves promising performance improvements on the test set.
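
For context, the SpMV kernel that dominates the AMG solve phase can be sketched as follows. This is a minimal illustrative CUDA example of a CSR-format SpMV, not code from the paper or from AmgX; the kernel and variable names are placeholders chosen for clarity.

```cuda
// Minimal CSR sparse matrix-vector multiply, y = A * x, one thread per row.
// Illustrative sketch only; not the AmgX implementation.
__global__ void csr_spmv(int n_rows,
                         const int *row_ptr,   // row offsets, length n_rows + 1
                         const int *col_idx,   // column indices of nonzeros
                         const double *vals,   // nonzero values
                         const double *x,      // input vector
                         double *y)            // output vector
{
    int row = blockIdx.x * blockDim.x + threadIdx.x;
    if (row < n_rows) {
        double sum = 0.0;
        // Accumulate the dot product of row 'row' with x.
        for (int j = row_ptr[row]; j < row_ptr[row + 1]; ++j)
            sum += vals[j] * x[col_idx[j]];
        y[row] = sum;
    }
}
```

A one-thread-per-row mapping like this is only one of several possible SpMV strategies on GPUs; its performance depends heavily on the matrix's row-length distribution, which is one reason no single SpMV (or SpGEMM) algorithm is best for all matrices.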
