Abstract

Sparse matrix-vector multiplication (SpMV) is a critical operation that dominates the computing cost in a wide variety of real-world scientific and engineering applications. While many sparse storage formats and their computing kernels have been developed in recent years, CSR (Compressed Sparse Row) remains the most popular and widely used sparse storage format, and CSR-based SpMV usually performs better for sparse matrices with a large number of nonzero elements. This paper presents a performance prediction model, built using a machine learning approach, that accurately predicts the execution time of GPU-accelerated SpMV with the CSR kernel. The prediction accuracy of our proposed model is evaluated on a collection of fourteen sparse matrices. The results of our experiments, performed on two different NVIDIA GPUs, demonstrate the effectiveness of the proposed approach.
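For readers unfamiliar with the CSR kernel referenced in the abstract, the following is a minimal sketch of the standard scalar CSR SpMV kernel (one thread per row) whose execution time such a model would predict. The kernel and array names (csr_spmv_scalar, row_ptr, col_idx, vals) are illustrative assumptions, not taken from the paper, and the authors' actual kernel may differ.

// Minimal sketch of a scalar CSR SpMV kernel computing y = A*x,
// with one GPU thread assigned to each matrix row (illustrative only).
__global__ void csr_spmv_scalar(int num_rows,
                                const int   *row_ptr,   // row offsets, length num_rows + 1
                                const int   *col_idx,   // column index of each nonzero
                                const float *vals,      // value of each nonzero
                                const float *x,         // dense input vector
                                float       *y)         // dense output vector
{
    int row = blockIdx.x * blockDim.x + threadIdx.x;
    if (row < num_rows) {
        float sum = 0.0f;
        // Accumulate the dot product of this row with the dense vector x.
        for (int j = row_ptr[row]; j < row_ptr[row + 1]; ++j) {
            sum += vals[j] * x[col_idx[j]];
        }
        y[row] = sum;
    }
}

In this layout, the work per thread is proportional to the number of nonzeros in its row, which is one reason per-matrix features (row lengths, nonzero distribution) are natural inputs for a machine-learning model that predicts SpMV execution time.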
