Abstract

The Message Passing Interface (MPI) is a dominant parallel programming paradigm. MPI processes communicate with each other by sending and receiving messages through communication functions. The application's communication latency is lower when communicating processes are scheduled on nearby cores or nodes, and higher when they are scheduled on distant ones. This latency can be reduced by a topology-aware process placement technique, in which MPI processes that communicate heavily with each other are placed on nearby cores. Finding the communication pattern between processes requires analysis of the MPI program. Various techniques, such as static, symbolic, and dynamic analysis, are available for finding the communication pattern of an MPI program, but they either take a long time for analysis or fail to find the correct communication pattern. In this paper, we propose DAPE (Dynamic Analysis with Partial Execution), a technique for analysing MPI programs that finds the correct communication pattern in less time than existing techniques. The experimental results show that the proposed technique outperforms the existing techniques.
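To make the placement idea concrete, the following is a minimal, hypothetical sketch (not the paper's DAPE algorithm): given a communication matrix that counts messages exchanged between MPI ranks, a greedy heuristic assigns the most heavily communicating rank pairs to adjacent core indices. The matrix values and the heuristic are illustrative assumptions only.

```python
def greedy_placement(comm_matrix):
    """Map each MPI rank to a core index so that rank pairs with the
    highest message counts end up on neighbouring cores.

    Illustrative heuristic only; real topology-aware placement tools
    also model the machine's node/socket/core hierarchy."""
    n = len(comm_matrix)
    # Sort rank pairs by communication volume, heaviest first.
    pairs = sorted(
        ((comm_matrix[i][j], i, j) for i in range(n) for j in range(i + 1, n)),
        reverse=True,
    )
    placement = {}
    next_core = 0
    for _, i, j in pairs:
        # Place both ranks of a heavy pair on the next free (adjacent) cores.
        for rank in (i, j):
            if rank not in placement:
                placement[rank] = next_core
                next_core += 1
    return placement

# Hypothetical communication matrix for 4 ranks: ranks 0 and 2 exchange
# the most messages, so they are assigned to adjacent cores 0 and 1.
matrix = [
    [0, 1, 9, 2],
    [1, 0, 1, 3],
    [9, 1, 0, 1],
    [2, 3, 1, 0],
]
print(greedy_placement(matrix))
```

The quality of such a placement depends entirely on how accurately the communication matrix reflects the application's real pattern, which is exactly what the program-analysis techniques discussed above aim to provide.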
