Abstract

Sparse matrix-vector multiplication dominates the performance of many scientific and industrial problems. For example, iterative methods for solving linear systems rely on the performance of this critical operation. The particular case of binary matrices appears in many important areas of computing, such as graph theory and cryptography. Unfortunately, irregular memory access patterns cause poor memory throughput, slowing down this operation. To maximize memory throughput, we transform the matrix into a straight-line program that takes full advantage of the instruction cache. The regular, loopless pattern of the program minimizes cache misses, thus decreasing the latency of most instructions. We focus on the widely used x86_64 architecture and on binary matrices to explore several possible tradeoffs regarding memory access policies and code size. Compared to a Compressed Row Storage (CRS) implementation, we obtain speedups of up to 4x.
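The abstract does not give implementation details, but the contrast it draws can be illustrated with a small sketch: a conventional CRS kernel whose inner loop follows data-dependent column indices, next to a generator that turns one concrete binary matrix into loopless straight-line code with the column positions baked into the instruction stream. The function names, the use of C, and the choice of double-precision accumulation below are assumptions made for illustration, not the paper's actual code.

```c
/* Illustrative sketch only: a CRS-style SpMV for a binary matrix
 * (all stored entries equal 1) and a tiny generator that emits an
 * unrolled, loopless version of the same product.
 * crs_spmv and emit_straight_line are hypothetical names. */
#include <stdio.h>

/* Baseline: y = A*x with A stored in CRS form.  Because the matrix is
 * binary, no value array is needed; each nonzero contributes x[col]. */
void crs_spmv(int n_rows, const int *row_ptr, const int *col_idx,
              const double *x, double *y)
{
    for (int i = 0; i < n_rows; ++i) {
        double acc = 0.0;
        for (int k = row_ptr[i]; k < row_ptr[i + 1]; ++k)
            acc += x[col_idx[k]];      /* irregular, data-dependent access */
        y[i] = acc;
    }
}

/* Generator: prints C source computing the same product with no loops and
 * no index loads; only x[] and y[] are touched at run time. */
void emit_straight_line(int n_rows, const int *row_ptr, const int *col_idx)
{
    printf("void spmv_unrolled(const double *x, double *y)\n{\n");
    for (int i = 0; i < n_rows; ++i) {
        printf("    y[%d] =", i);
        if (row_ptr[i] == row_ptr[i + 1]) {   /* empty row */
            printf(" 0.0;\n");
            continue;
        }
        for (int k = row_ptr[i]; k < row_ptr[i + 1]; ++k)
            printf("%s x[%d]", k == row_ptr[i] ? "" : " +", col_idx[k]);
        printf(";\n");
    }
    printf("}\n");
}

int main(void)
{
    /* 3x4 binary example matrix:
     *   1 0 1 0
     *   0 1 0 0
     *   1 1 0 1   */
    int row_ptr[] = {0, 2, 3, 6};
    int col_idx[] = {0, 2, 1, 0, 1, 3};
    emit_straight_line(3, row_ptr, col_idx);
    return 0;
}
```

Running the generator on this toy matrix prints a function whose body is three assignments such as `y[0] = x[0] + x[2];`, which is the kind of straight-line program the abstract describes; the tradeoff is that code size grows with the number of nonzeros.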
