Abstract

The lightweight nature, application portability, and deployment flexibility of containers are driving their widespread adoption in cloud solutions. Data analysis and deep learning (DL)/machine learning (ML) applications have especially benefited from containerization. As such data analysis is adopted in high performance computing (HPC), the need for container support in HPC has become paramount. However, containers face crucial performance and I/O challenges in HPC. One obstacle is that although HPC container solutions exist, they have not been thoroughly investigated, especially with respect to their impact on the crucial HPC I/O throughput. To this end, this paper provides a first-of-its-kind empirical analysis of state-of-the-art representative container solutions (Docker, Podman, Singularity, and Charliecloud) in HPC environments. We also explore how containers interact with an HPC parallel file system such as Lustre. We present the design of an analysis framework that is deployed on all nodes in an HPC environment and captures CPU, memory, network, and file I/O statistics from the nodes and the storage system. Our analysis yields key insights, e.g., Charliecloud outperforms the other container solutions in container start-up time, while Singularity and Charliecloud are equivalent in I/O throughput. However, this comes at a cost, as Charliecloud invokes the most metadata and I/O operations on the underlying Lustre file system. By identifying such trade-offs and optimization opportunities, we can enhance the performance of HPC containers and of the ML/DL applications that increasingly rely on them.
