Abstract
Recently, MPI has become widely used to parallelize scientific applications, including in many non-computer-science fields. The MPI programming model supports parallelism in several programming languages, including C, C++, and Fortran, integrates with other programming models, and has several open-source and commercial implementations from different vendors. However, testing parallel programs is a difficult task, because each programming model exhibits its own behaviours and classes of error. In addition, the growing use of these programming models by non-computer-science specialists leads to errors caused by limited programming experience, which any testing tool must take into account. We observed that the majority of MPI programs are tested with dynamic techniques, which detect errors by analyzing the code at runtime; this introduces overheads that degrade performance, especially for massively parallel applications that spawn thousands or millions of threads. In this paper, we enhance ACC_TEST to test MPI-based programs and detect runtime errors arising from the different types of MPI communication. We adopt a hybrid testing approach that combines static and dynamic testing techniques to gain the benefits of each while reducing the cost.
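As an illustration of the kind of runtime error in blocking MPI communication that such a tester targets, the following minimal C sketch (our own example, not taken from the paper or from ACC_TEST) can deadlock: both ranks block in MPI_Send before either posts the matching MPI_Recv, which stalls once the message exceeds the implementation's eager-buffering threshold. The buffer size N and the two-rank assumption are illustrative choices.

/* Minimal sketch of a potential point-to-point deadlock (assumes exactly 2 ranks). */
#include <mpi.h>
#include <stdlib.h>

#define N (1 << 20)   /* large enough to defeat eager buffering on many MPI implementations */

int main(int argc, char **argv)
{
    int rank, peer;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    int *sendbuf = malloc(N * sizeof(int));
    int *recvbuf = malloc(N * sizeof(int));
    peer = 1 - rank;

    /* Both ranks block here waiting for a receive that neither has posted yet. */
    MPI_Send(sendbuf, N, MPI_INT, peer, 0, MPI_COMM_WORLD);
    MPI_Recv(recvbuf, N, MPI_INT, peer, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);

    free(sendbuf);
    free(recvbuf);
    MPI_Finalize();
    return 0;
}

A purely dynamic tool only observes this hang when it actually occurs at runtime; a hybrid approach can flag the unsafe send/receive ordering statically and confirm it dynamically.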
Highlights
Message-Passing Interface (MPI) is one of the most widely used programming models for parallelizing scientific applications
We focus on the interaction between MPI and other programming models, especially high-level programming models such as OpenACC
Many testing tools target MPI; we focus only on detecting errors that occur in MPI programs that also use GPU-related programming models
Summary
Message-Passing Interface (MPI) is one of the most widely used programming models for parallelizing scientific applications. Our hybrid testing technique examines the targeted source code, collecting the information related to MPI_Send, MPI_Recv, and MPI_Sendrecv calls in order to detect actual or potential errors. The static analysis detects every wildcard receive (any source or any tag), examines it for potential deadlock or race conditions, and annotates it so that it can be checked in the dynamic phase. Another case of point-to-point blocking communication is the MPI_Sendrecv call, which is examined and analyzed in the same way as MPI_Send and MPI_Recv. To test for race conditions in MPI_Sendrecv, ACC_TEST uses the same insertion mechanism as the previous test, as shown in Figure 4, comparing the actual message-exchange information with the information gathered during static analysis. Our dynamic testing uses MPI_Test for each MPI broadcast call and extracts the actual information, including the broadcast's source.
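The following C sketch shows the kind of wildcard receive the static phase would flag and the dynamic phase would check; it is our own illustrative example, not ACC_TEST's actual instrumentation. Rank 0 receives with MPI_ANY_SOURCE and MPI_ANY_TAG, so the match order among the senders is nondeterministic (a potential race condition); at runtime the actual MPI_Status fields can be compared against the source/tag expectations recorded during static analysis.

/* Illustrative wildcard-receive example (assumed, not from the paper). */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, size, data;
    MPI_Status status;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    if (rank == 0) {
        /* MPI_ANY_SOURCE / MPI_ANY_TAG: the order in which workers are
         * matched is nondeterministic, which static analysis flags. */
        for (int i = 1; i < size; i++) {
            MPI_Recv(&data, 1, MPI_INT, MPI_ANY_SOURCE, MPI_ANY_TAG,
                     MPI_COMM_WORLD, &status);
            /* Dynamic check (illustrative): record which sender actually
             * matched and compare with the statically collected information. */
            printf("match %d: source=%d tag=%d\n",
                   i, status.MPI_SOURCE, status.MPI_TAG);
        }
    } else {
        data = rank;
        MPI_Send(&data, 1, MPI_INT, 0, rank, MPI_COMM_WORLD);
    }

    MPI_Finalize();
    return 0;
}

Run with, for example, mpirun -np 4; the printed match order can differ between runs, which is exactly the nondeterminism the annotated dynamic check is meant to expose.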