Abstract

MPI (Message Passing Interface) is the inter-node communication interface used in today's PC clusters and other cluster-type parallel/distributed systems. To date, the most popular analytical MPI performance model for parallel/distributed machines is the LogGP model, which is based on system hardware parameters. Recently, improvements in interconnection networks and communication protocols have led to dramatic changes in the contributions of these hardware parameters to the modeled communication time. The LogGP model therefore needs to be re-evaluated, both in its form and in its hardware parameters, to include the effect of message structures, which appears as communication middleware overhead. In this paper, we use our experimental results to show that the current LogGP communication model is too limited for today's parallel/distributed systems. We propose a modification that adds system parameters to the model to represent middleware costs on non-contiguous data. We present our theoretical and experimental results for point-to-point communication, and explain how performance models for other communication patterns can be created in the same way.
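To make the proposal concrete, the following is a minimal sketch of the classic LogGP point-to-point cost for a k-byte message (send overhead o, latency L, per-byte gap G) alongside an extended estimate that adds a middleware-overhead function m(k) for non-contiguous data. The parameter values and the linear form of m are purely illustrative assumptions, not the paper's measured model.

```python
def loggp_time(k, L, o, G):
    """Classic LogGP point-to-point estimate for a k-byte message:
    send overhead + per-byte transmission gap + wire latency + receive overhead."""
    return o + (k - 1) * G + L + o

def extended_time(k, L, o, G, m):
    """LogGP plus a middleware-overhead term m(k), e.g. datatype pack/unpack
    cost for non-contiguous buffers (the form of m here is an assumption)."""
    return loggp_time(k, L, o, G) + m(k)

# Illustrative parameters in arbitrary time units (not measured values):
L, o, G = 5.0, 1.0, 0.01
t_plain = loggp_time(1024, L, o, G)
# Hypothetical middleware cost linear in message size (e.g. a copy per byte):
t_ext = extended_time(1024, L, o, G, m=lambda k: 0.05 * k)
print(t_plain, t_ext)
```

The point of the extension is that for non-contiguous data, m(k) can dominate the hardware terms, which is why a purely hardware-parameter model underestimates the communication time.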
