Abstract

Hybrid MPI–OpenMP applications combine Message Passing Interface (MPI) process-level, distributed computation across many compute nodes with OpenMP shared-memory, thread-level parallelism within each node to compute as efficiently as possible. This poses challenges for the dynamic MPI correctness tool MUST and for TypeART, its LLVM-based sanitizer extension that tracks memory allocations. In particular, within a single process, MPI calls and memory allocations, both of which our analysis tracks, can occur concurrently on different threads. To handle this situation correctly, we 1) extended our LLVM compiler extension to handle OpenMP and 2) introduced thread-safety mechanisms into our runtime libraries, keeping the tracking data consistent and avoiding data races. Our approach exhibits acceptable runtime and memory overheads, both typically below 30%.
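
The following minimal sketch (not taken from the paper; all buffer names and sizes are illustrative assumptions) shows the hybrid situation the abstract describes: each OpenMP thread of an MPI process may allocate its own buffers and issue its own MPI calls, so any tool that intercepts allocations and MPI calls within that process must do so in a thread-safe manner.

```cpp
#include <mpi.h>
#include <omp.h>
#include <vector>

int main(int argc, char** argv) {
  // Request full thread support so several threads may call MPI concurrently.
  int provided = 0;
  MPI_Init_thread(&argc, &argv, MPI_THREAD_MULTIPLE, &provided);

  int rank = 0, size = 0;
  MPI_Comm_rank(MPI_COMM_WORLD, &rank);
  MPI_Comm_size(MPI_COMM_WORLD, &size);

  #pragma omp parallel
  {
    const int tag = omp_get_thread_num();

    // Each thread performs its own heap allocations ...
    std::vector<double> send(1024, static_cast<double>(rank));
    std::vector<double> recv(1024, 0.0);

    // ... and its own communication, so allocation tracking and MPI-call
    // interception run concurrently within one process.
    MPI_Sendrecv(send.data(), static_cast<int>(send.size()), MPI_DOUBLE,
                 (rank + 1) % size, tag,
                 recv.data(), static_cast<int>(recv.size()), MPI_DOUBLE,
                 (rank - 1 + size) % size, tag,
                 MPI_COMM_WORLD, MPI_STATUS_IGNORE);
  }

  MPI_Finalize();
  return 0;
}
```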
