Programming models that use a higher level of abstraction to express parallelism can target both CPUs and any attached devices, alleviating the maintainability and portability concerns facing today's heterogeneous systems. This article describes the design, implementation, and delivery of a compliant OpenMP device offloading implementation in the mainline open-source Clang/LLVM compiler and OpenMP runtime projects, targeting the IBM-NVIDIA heterogeneous servers that make up the Summit and Sierra supercomputers. From a performance perspective, reconciling the GPU programming model, best suited for massively parallel workloads, with the generality of the OpenMP model was a significant challenge. To achieve both high performance and full portability, we map high-level programming patterns to fine-tuned code generation schemes and customized runtimes that preserve OpenMP semantics. In the compiler, we implement a low-overhead single-program multiple-data (SPMD) scheme that leverages the GPU's native execution model, plus a fallback scheme to support the full generality of OpenMP. The modular design enables the implementation to be extended with new schemes for frequently occurring patterns. Our implementation relies on key optimizations: sharing data among threads, leveraging unified memory, aggressive inlining of runtime calls, memory coalescing, and runtime simplification. We show that for commonly used patterns, performance on the Summit and Sierra GPUs matches that of hand-written native CUDA code.
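To make the pattern concrete, below is a minimal sketch, not drawn from the article, of a SAXPY-style OpenMP target region. The combined `target teams distribute parallel for` construct is the kind of high-level pattern the SPMD code-generation scheme can map directly onto the GPU's native execution model (teams onto thread blocks, loop iterations onto threads); the loop body, array names, and sizes are illustrative assumptions.

```c
/* Illustrative sketch only; not code from the article. Compile with an
 * OpenMP-offload-capable Clang, e.g.:
 *   clang -fopenmp -fopenmp-targets=nvptx64 saxpy.c
 */
#include <stdio.h>

#define N (1 << 20)

static float x[N], y[N];

int main(void) {
    for (int i = 0; i < N; ++i) {
        x[i] = 1.0f;
        y[i] = 2.0f;
    }

    /* Combined construct: a pattern amenable to SPMD code generation,
     * distributing the loop across teams and threads on the device. */
    #pragma omp target teams distribute parallel for \
            map(to: x[0:N]) map(tofrom: y[0:N])
    for (int i = 0; i < N; ++i)
        y[i] += 2.0f * x[i];

    printf("y[0] = %f\n", y[0]);  /* expect 4.0 */
    return 0;
}
```

By contrast, a target region containing arbitrary sequential code interleaved with parallel regions cannot use this direct mapping and would fall back to the more general (and higher-overhead) scheme the abstract mentions.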