Abstract

The cloud brings new possibilities for running traditional HPC applications, given its flexibility and reduced cost. However, running MPI applications in the cloud can appreciably degrade their performance, because the cloud hides its internal network topology, and existing topology-aware techniques for optimizing MPI communications cannot be directly applied to virtualized infrastructures. This paper presents the MPI-Performance-Aware-Reallocation (MPAR) method, a general approach to improving MPI communications. This new approach: (i) is not tied to any specific software or hardware infrastructure, (ii) is applicable to the cloud, (iii) abstracts the network topology by performing experimental tests, and (iv) is able to improve the performance of the user's MPI application by reallocating the involved MPI processes. MPAR has been demonstrated for cloud infrastructures through the implementation of the Latency-Aware-MPI-Cloud-Scheduler (LAMPICS) layer. LAMPICS is able to improve the latency of MPI communications in clouds without creating ad-hoc MPI implementations or modifying the source code of the user's MPI applications. We have tested LAMPICS with the Sendrecv micro-benchmark provided by the Intel MPI Benchmarks, obtaining performance improvements of up to 70%, and with two real-world applications from the Unified European Applications Benchmark Suite, obtaining performance improvements of up to 26.5%.
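
As an illustration of the kind of experimental test the abstract describes, the sketch below measures pairwise Sendrecv latency between MPI processes, similar in spirit to the Intel MPI Benchmarks' Sendrecv pattern. It is a minimal sketch, not the LAMPICS implementation: the iteration count, message size, and output format are illustrative assumptions.

    /* Minimal pairwise latency probe (illustrative, not from LAMPICS). */
    #include <mpi.h>
    #include <stdio.h>

    #define NITER    1000  /* repetitions per pair; an illustrative choice */
    #define MSG_SIZE 1     /* tiny messages expose latency, not bandwidth  */

    int main(int argc, char **argv)
    {
        int rank, size;
        char sbuf[MSG_SIZE] = {0}, rbuf[MSG_SIZE];

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        /* Probe each peer against rank 0; a scheduler following the MPAR
           idea would collect such measurements into a latency map and use
           it to decide how to reallocate the MPI processes. */
        for (int peer = 1; peer < size; peer++) {
            MPI_Barrier(MPI_COMM_WORLD); /* keep other ranks idle while probing */
            if (rank == 0) {
                double t0 = MPI_Wtime();
                for (int i = 0; i < NITER; i++)
                    MPI_Sendrecv(sbuf, MSG_SIZE, MPI_CHAR, peer, 0,
                                 rbuf, MSG_SIZE, MPI_CHAR, peer, 0,
                                 MPI_COMM_WORLD, MPI_STATUS_IGNORE);
                double avg = (MPI_Wtime() - t0) / NITER;
                printf("rank 0 <-> rank %d: %.2f us per Sendrecv\n",
                       peer, avg * 1e6);
            } else if (rank == peer) {
                for (int i = 0; i < NITER; i++)
                    MPI_Sendrecv(sbuf, MSG_SIZE, MPI_CHAR, 0, 0,
                                 rbuf, MSG_SIZE, MPI_CHAR, 0, 0,
                                 MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            }
        }

        MPI_Finalize();
        return 0;
    }

A probe of this kind can be compiled with mpicc and launched with mpirun; since the cloud hides the physical topology, measured latencies stand in for the topology information that traditional topology-aware MPI techniques would read directly from the infrastructure.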
