Abstract
Conventional Graphics Processing Unit (GPU) implementations of Strassen's algorithm (Strassen) rely on the existing high-performance matrix multiplication (gemm), trading space for time. As a result, such approaches can only achieve practical speedup for relatively large, "squarish" matrices due to the extra memory overhead, and their applicability is limited by the considerable workspace they require. We present novel Strassen primitives for GPUs that can be composed to generate a family of Strassen algorithms. Our algorithms utilize both the memory and thread hierarchies on GPUs, reusing shared memory and register files inherited from gemm, fusing additional operations, and avoiding extra workspace. We further exploit intra- and inter-kernel parallelism by batching, streaming, and employing atomic operations. We develop a performance model for NVIDIA Volta GPUs to select appropriate blocking parameters and to predict the performance of gemm and Strassen. Overall, our 1-level Strassen achieves up to a 1.11× speedup with a crossover point as small as 1,536 compared with cublasSgemm on an NVIDIA Tesla V100 GPU. With additional workspace, our 2-level Strassen achieves a 1.19× speedup with a crossover point at 7,680.
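For readers unfamiliar with the algorithm underlying these primitives, the following is a minimal pure-Python sketch of 1-level Strassen: the input matrices are split into quadrants and the product is formed from seven sub-multiplications instead of the classical eight. This is only an illustration of the arithmetic; the paper's contribution is fusing these additions into the gemm kernels on the GPU, not this reference formulation. All function names below (`add`, `sub`, `gemm`, `strassen_1level`) are hypothetical and not from the paper.

```python
def add(X, Y):
    # Elementwise matrix addition.
    return [[x + y for x, y in zip(rx, ry)] for rx, ry in zip(X, Y)]

def sub(X, Y):
    # Elementwise matrix subtraction.
    return [[x - y for x, y in zip(rx, ry)] for rx, ry in zip(X, Y)]

def gemm(X, Y):
    # Plain O(n^3) multiply, standing in for the base-level gemm call.
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def quad(X):
    # Split an even-order square matrix into four quadrants.
    n = len(X) // 2
    return ([r[:n] for r in X[:n]], [r[n:] for r in X[:n]],
            [r[:n] for r in X[n:]], [r[n:] for r in X[n:]])

def strassen_1level(A, B):
    A00, A01, A10, A11 = quad(A)
    B00, B01, B10, B11 = quad(B)
    # The seven products that replace the classical eight.
    M1 = gemm(add(A00, A11), add(B00, B11))
    M2 = gemm(add(A10, A11), B00)
    M3 = gemm(A00, sub(B01, B11))
    M4 = gemm(A11, sub(B10, B00))
    M5 = gemm(add(A00, A01), B11)
    M6 = gemm(sub(A10, A00), add(B00, B01))
    M7 = gemm(sub(A01, A11), add(B10, B11))
    # Recombine into the quadrants of C.
    C00 = add(sub(add(M1, M4), M5), M7)
    C01 = add(M3, M5)
    C10 = add(M2, M4)
    C11 = add(sub(add(M1, M3), M2), M6)
    # Stitch the quadrants back into one matrix.
    return ([c0 + c1 for c0, c1 in zip(C00, C01)] +
            [c0 + c1 for c0, c1 in zip(C10, C11)])
```

In the conventional GPU approach the seven `M` products and the quadrant additions each materialize temporaries (the "extra workspace" the abstract refers to); the paper's primitives instead absorb those additions into the multiply kernels.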