Abstract

Containerization techniques for High Performance Computing (HPC) are becoming more prominent as scientific applications grow in features and complexity. Message Passing Interface (MPI) applications depend on many other scientific libraries and require new patches, new code versions, and bug fixes to execute scientific workloads effectively. Updating code versions, installing new patches, and troubleshooting build issues are time-consuming and complicated tasks. These problems can be tackled by making MPI applications portable through containerization, which hides system-level dependencies and complexities. This work presents the challenges of, and approaches for, building and executing containerized MPI applications on InfiniBand-based HPC systems. A comprehensive performance evaluation of a containerized MPI application, converted from Docker to Singularity, is performed on up to 880 MPI ranks over InfiniBand, with bare-metal execution as the reference. In addition, MPI applications are profiled while running in an isolated container environment on InfiniBand-based HPC with 640 MPI ranks.
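The Docker-to-Singularity workflow described above can be sketched with standard Singularity and MPI commands. The image name, registry, and application path below are placeholders, not details from the paper; the common "hybrid" launch model is assumed, where the host `mpirun` starts one container instance per rank.

```shell
# Convert an existing Docker image into a Singularity image file (SIF).
# "myregistry/mpi_app:latest" is a hypothetical image name.
singularity build mpi_app.sif docker://myregistry/mpi_app:latest

# Launch the containerized application with the host MPI runtime.
# For InfiniBand to be used, the MPI library inside the container must be
# ABI-compatible with the host MPI that mpirun belongs to.
mpirun -np 880 singularity exec mpi_app.sif /opt/app/mpi_app
```

In this model the container only packages the application and its library dependencies, while process launch and the InfiniBand interconnect are managed by the host, which is what makes a bare-metal comparison meaningful.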
