Abstract

The solution of sparse triangular linear systems is one of the most important building blocks for a large number of science and engineering problems. For this reason, it has been studied steadily for several decades, principally in order to take advantage of emerging parallel platforms. In the context of massively parallel platforms such as GPUs, the standard parallel solution strategy is based on performing a level‐set analysis of the sparse matrix, and the kernel included in the NVIDIA cuSparse library is the most prominent example of this approach. However, the weak spots of this implementation are the costly analysis phase and the constant synchronizations with the CPU during the solution stage. In previous work, we presented a self‐scheduled and synchronization‐free GPU algorithm that avoids the analysis phase and the synchronizations of the standard approach. Here, we extend this proposal and show how the level‐set information can be leveraged to improve its performance. In particular, we present new GPU solution routines that address some of the weak spots of the self‐scheduled solver, such as the under‐utilization of GPU resources in the case of highly sparse matrices. The experimental evaluation reveals a considerable runtime reduction over cuSparse and the state‐of‐the‐art synchronization‐free method.
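To illustrate the level-set analysis that the standard approach relies on, the sketch below computes level sets for a lower-triangular matrix stored in CSR form. The function name, the CSR layout, and the sequential formulation are illustrative assumptions for exposition; they are not the cuSparse implementation, which performs this analysis (and the subsequent solve) on the GPU.

```python
def level_sets(row_ptr, col_idx, n):
    """Group the rows of an n-by-n lower-triangular CSR matrix into level sets.

    Row i depends on every row j < i for which entry (i, j) is nonzero.
    Its level is one more than the deepest level among its dependencies,
    so all rows in the same level set can be solved in parallel.
    """
    level = [0] * n
    for i in range(n):
        deepest = -1
        for k in range(row_ptr[i], row_ptr[i + 1]):
            j = col_idx[k]
            if j < i:  # off-diagonal entry: row i depends on row j
                deepest = max(deepest, level[j])
        level[i] = deepest + 1

    # Bucket rows by level; the buckets, in order, form the level sets.
    sets = {}
    for i, lvl in enumerate(level):
        sets.setdefault(lvl, []).append(i)
    return [sets[lvl] for lvl in sorted(sets)]
```

For example, a 4-by-4 lower-triangular matrix with nonzeros at (0,0), (1,0), (1,1), (2,2), (3,1), (3,3) yields the level sets [[0, 2], [1], [3]]: rows 0 and 2 have no dependencies, row 1 waits on row 0, and row 3 waits on row 1. The costly part in practice is that this analysis must traverse the whole sparsity pattern before any solve can start, which is exactly the phase the self-scheduled solver avoids.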
