GPU performance of the lattice Boltzmann method (LBM) depends heavily on memory access patterns. When LBM is implemented on GPUs over complex domains, geometric data is typically accessed indirectly and lattice data lexicographically. Although a variety of other options exist, no study has examined their relative efficacy. Here, we examine a suite of memory access schemes via empirical testing and performance modeling. We find strong evidence that semi-direct addressing is often better suited than the more common indirect addressing, increasing computational speed and reducing memory consumption. For the data layout, we find that the Collected Structure of Arrays (CSoA) and bundling layouts outperform the common Structure of Arrays (SoA) layout; on V100 and P100 devices, CSoA consistently outperforms bundling, whereas the relationship is more complicated on K40 devices. Compared to state-of-the-art practices, our recommendations yield speedups of 10-40 percent and reduce memory consumption by up to 17 percent. Using performance modeling and computational experimentation, we determine the mechanisms behind these accelerations. We demonstrate that our results hold across multiple GPUs on two leadership-class systems, and we present the first near-optimal strong scaling results for LBM with arterial geometries run on GPUs.
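To make the contrast between the two addressing schemes concrete, the following is a minimal CUDA sketch, not the paper's implementation: one streaming kernel under indirect addressing (compacted fluid nodes plus a precomputed neighbor table) and one under semi-direct addressing (a full lexicographic bounding box with a fluid mask), together with an illustrative CSoA index helper. All names (`f_src`, `neighbor`, `is_fluid`, the cluster length `CL`) and the D3Q19 assumptions are for illustration only, and boundary handling is reduced to a bounds check.

```cuda
#include <cuda_runtime.h>

#define Q 19  // D3Q19 velocity set (assumed for illustration)

// Indirect addressing: only fluid nodes are stored, compacted into n_fluid
// slots; a neighbor table maps each (node, direction) pair to its source
// slot, costing one extra index load per direction.
__global__ void stream_indirect(const float* __restrict__ f_src,
                                float* __restrict__ f_dst,
                                const int* __restrict__ neighbor,  // [n_fluid][Q]
                                int n_fluid)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n_fluid) return;
    for (int q = 0; q < Q; ++q)
        f_dst[(long)q * n_fluid + i] =
            f_src[(long)q * n_fluid + neighbor[(long)i * Q + q]];
}

// Semi-direct addressing: distributions are stored lexicographically over the
// whole bounding box; a boolean mask skips solid cells, and neighbor offsets
// are computed arithmetically instead of being loaded from a table.
__global__ void stream_semidirect(const float* __restrict__ f_src,
                                  float* __restrict__ f_dst,
                                  const bool* __restrict__ is_fluid,
                                  int nx, int ny, int nz,
                                  const int* __restrict__ c)  // [Q][3] velocities
{
    int x = blockIdx.x * blockDim.x + threadIdx.x;
    int y = blockIdx.y * blockDim.y + threadIdx.y;
    int z = blockIdx.z * blockDim.z + threadIdx.z;
    if (x >= nx || y >= ny || z >= nz) return;
    long cell = ((long)z * ny + y) * nx + x;
    if (!is_fluid[cell]) return;  // masked-out solid node: thread exits early
    long n_cells = (long)nx * ny * nz;
    for (int q = 0; q < Q; ++q) {
        int xs = x - c[3 * q], ys = y - c[3 * q + 1], zs = z - c[3 * q + 2];
        if (xs < 0 || xs >= nx || ys < 0 || ys >= ny || zs < 0 || zs >= nz)
            continue;  // real boundary conditions omitted in this sketch
        long src = ((long)zs * ny + ys) * nx + xs;  // arithmetic offset, no table
        f_dst[q * n_cells + cell] = f_src[q * n_cells + src];
    }
}

// CSoA indexing sketch: cells are grouped into clusters of CL consecutive
// lanes, and all Q distributions of a cluster are stored contiguously, which
// can improve coalescing over plain SoA on some devices.
__device__ inline long csoa_index(long cell, int q)
{
    const int CL = 32;  // assumed cluster length of one warp
    long cluster = cell / CL, lane = cell % CL;
    return (cluster * Q + q) * CL + lane;
}
```

Under this reading, semi-direct addressing trades the per-direction neighbor-table load (and the table's storage) for purely arithmetic offsets plus a mask over solid cells, which is consistent with the abstract's finding that it can both speed up the computation and reduce memory consumption relative to indirect addressing.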