Abstract
The performance of MPI collective operations remains a critical issue for high-performance computing systems, particularly as processor technology advances. Consequently, this study benchmarks an MPI implementation on multi-core architecture by measuring the performance of Open MPI collective communication on Intel Xeon dual quad-core Gigabit Ethernet and InfiniBand clusters using SKaMPI. It focuses on well-known collective communication routines: MPI_Bcast, MPI_Alltoall, MPI_Scatter, and MPI_Gather. The collected results show that MPI collective communication on the InfiniBand cluster performed distinctly better in terms of both latency and throughput. The analysis indicates that the algorithms used for collective communication performed well across all message sizes, except for the MPI_Bcast and MPI_Alltoall operations in inter-node communication. InfiniBand provides the lowest latency for all operations because it offers applications a direct, easy-to-use messaging service, whereas Gigabit Ethernet must still request access to the server's communication resources from the operating system, introducing a complex exchange between the application and the network.
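For reference, the four routines benchmarked in the study follow the standard MPI C interface. The sketch below is only an illustration of these calls; the buffer size, element type, and use of MPI_COMM_WORLD are assumptions for the example, not the SKaMPI benchmark configuration used in the paper.

```c
/* Minimal illustration of the collectives benchmarked in this study:
 * MPI_Bcast, MPI_Alltoall, MPI_Scatter, MPI_Gather.
 * Buffer size and communicator are illustrative assumptions only.
 * Build with an MPI compiler wrapper, e.g.: mpicc collectives.c */
#include <mpi.h>
#include <stdlib.h>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    const int count = 1024;  /* elements per process block (assumed) */
    int *sendbuf = malloc((size_t)count * size * sizeof(int));
    int *recvbuf = malloc((size_t)count * size * sizeof(int));
    for (int i = 0; i < count * size; i++) sendbuf[i] = rank;

    /* Root broadcasts one block to every process. */
    MPI_Bcast(sendbuf, count, MPI_INT, 0, MPI_COMM_WORLD);

    /* Every process exchanges a distinct block with every other process. */
    MPI_Alltoall(sendbuf, count, MPI_INT, recvbuf, count, MPI_INT, MPI_COMM_WORLD);

    /* Root distributes consecutive blocks of its buffer, one per process. */
    MPI_Scatter(sendbuf, count, MPI_INT, recvbuf, count, MPI_INT, 0, MPI_COMM_WORLD);

    /* Root collects one block from every process. */
    MPI_Gather(sendbuf, count, MPI_INT, recvbuf, count, MPI_INT, 0, MPI_COMM_WORLD);

    free(sendbuf);
    free(recvbuf);
    MPI_Finalize();
    return 0;
}
```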
Highlights
The emerging trend of using clusters for High Performance Computing (HPC) has led to much research in this field; the standard approach for communication between nodes is the Message Passing Interface (MPI) (Isaila et al., 2010; Balaji et al., 2009)
The results show that InfiniBand has the lowest latency, approximately 24.2% lower than Gigabit Ethernet, for both core configurations
This is because the multiple paths of high-speed InfiniBand allow data transmission to complete faster than over Gigabit Ethernet
Summary
The emerging trend of using clusters for High Performance Computing (HPC) has led to much research in this field; the standard approach for communication between nodes is the Message Passing Interface (MPI) (Isaila et al., 2010; Balaji et al., 2009). MPI is a library of routines that provides a portable programming paradigm for existing development environments, and application performance depends critically on the performance of the communication routines provided by the MPI library. InfiniBand is able to provide lower latency and higher bandwidth than Gigabit Ethernet: its latency ranges between 1 and 10 μs, and it can support network bandwidth of up to 10 Gbit/sec (approximately 1250 MBytes/sec). The outcome would be beneficial for further research related to Open MPI implementations on multi-core clusters
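To convey how collective latency can be measured on such clusters, the sketch below times repeated MPI_Bcast calls with MPI_Wtime. It is only a simplified stand-in for what SKaMPI automates; the message size, iteration count, and lack of warm-up or outlier control are assumptions for illustration, not the paper's measurement methodology.

```c
/* Hedged sketch of a latency measurement for MPI_Bcast, in the spirit of
 * what SKaMPI automates. Message size and repetition count are assumed;
 * SKaMPI's actual methodology is more careful (warm-up, calibrated
 * repetitions, statistical control). */
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);
    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    const int msg_bytes = 1 << 16;   /* 64 KiB message (assumed) */
    const int iters = 1000;          /* repetitions (assumed) */
    char *buf = malloc(msg_bytes);

    MPI_Barrier(MPI_COMM_WORLD);     /* synchronize all ranks before timing */
    double t0 = MPI_Wtime();
    for (int i = 0; i < iters; i++)
        MPI_Bcast(buf, msg_bytes, MPI_CHAR, 0, MPI_COMM_WORLD);
    double t1 = MPI_Wtime();

    if (rank == 0)
        printf("MPI_Bcast average latency: %.3f us for %d bytes\n",
               (t1 - t0) / iters * 1e6, msg_bytes);

    free(buf);
    MPI_Finalize();
    return 0;
}
```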