Abstract

We present an implementation and scaling analysis of a GPU-accelerated kernel for HemeLB, a high-performance Lattice Boltzmann code for sparse complex geometries. We describe the structure of the GPU implementation and study the scalability of HemeLB on a GPU cluster under normal operating conditions with real-world application cases. We investigate the effect of CUDA block size and GPU over-subscription on single-GPU performance, and we present a strong-scaling analysis of multi-GPU parallel simulations using two different hardware models (P100 and V100) and a variety of large cerebral aneurysm geometries. We find that HemeLB-GPU achieves single-GPU speedups of $50\times$ (P100) and $100\times$ (V100) compared to a single CPU core, with good scalability up to 32 GPUs. We also discuss strategies to improve both the kernel performance and the scalability of HemeLB-GPU to a larger number of GPUs. The GPU implementation supports the LBGK collision kernel, boundary conditions for walls and inlets/outlets, and several lattice types (D3Q15, D3Q19, D3Q27), and it integrates seamlessly with the existing infrastructure in HemeLB for graph partitioning and parallelization via MPI. It is expected that the GPU implementation will enable users of the HemeLB code to better utilize heterogeneous high-performance computing systems for large-scale lattice Boltzmann simulations.

Highlights

  • The Lattice Boltzmann Method (LBM) is a highly versatile computational fluid dynamics solver for applications that target flow modeling in sparse complex geometries [1], [2], and it has substantial engineering importance due to its high parallel computing performance [3]–[5]

  • We describe our multi-GPU implementation of the lattice Boltzmann method, which is based on the high-performance LBM flow solver HemeLB [6]–[8], one of the key use cases of the MPI-4 standard [9]

  • The block size appears to have very little impact on performance. From these results, combined with the insights obtained from NVIDIA Nsight Compute and the CUDA Occupancy Calculator, we conclude that the LBM kernel is still limited by register usage rather than compute or memory bandwidth



Introduction

The Lattice Boltzmann Method (LBM) is a highly versatile computational fluid dynamics solver for applications that target flow modeling in sparse complex geometries [1], [2], and it has substantial engineering importance due to its high parallel computing performance [3]–[5]. As a result, a high-performance LB framework based on a hybrid CPU-GPU parallelization scheme will enable researchers to study more complex fluid flow problems in a timely manner. We describe our multi-GPU implementation of the lattice Boltzmann method, which is based on the high-performance LBM flow solver HemeLB [6]–[8], one of the key use cases of the MPI-4 standard [9]. The particle distribution functions $f_i$ evolve according to the lattice Boltzmann equation in its general form [10], [11]:

$$f_i(\mathbf{x} + h\mathbf{c}_i,\, t + h) = f_i(\mathbf{x}, t) - \Omega_{ij}\left[f_j(\mathbf{x}, t) - f_j^{\mathrm{eq}}(\rho, \mathbf{u})\right], \tag{1}$$

where $h$ is the time step, $\mathbf{c}_i$ are the discrete lattice velocities, $\Omega_{ij}$ is the collision matrix, and $f_j^{\mathrm{eq}}$ is the equilibrium distribution determined by the local density $\rho$ and velocity $\mathbf{u}$.

