The Message Passing Interface (MPI) is a message-passing library standard based on the consensus of the MPI Forum, which has over 40 participating organizations, including vendors, researchers, software library developers, and users. The goal of MPI is to establish a portable, efficient, and flexible standard for message passing that will be widely used for writing message-passing programs. As such, MPI is the first standardized, vendor-independent message-passing library, and the advantages of developing message-passing software with MPI closely match its design goals of portability, efficiency, and flexibility. MPI is not an IEEE or ISO standard, but it has in fact become the "industry standard" for writing message-passing programs on HPC platforms. As parallel systems are increasingly built from large multicore chips, application programmers are exploring hybrid programming models that combine MPI across nodes with multithreading within a node. Many MPI implementations, however, are only beginning to support multithreaded MPI communication, often focusing on correctness first and performance later. The MPI standard defines functions for initializing the thread environment, but it does not require that every implementation satisfy the full set of thread-support requirements, namely that all MPI calls be thread-safe and that blocking MPI calls block only the calling thread. An MPI process may be multithreaded, and each thread can issue MPI calls. The MPI Standard, however, requires only that no MPI call in one thread block MPI calls in other threads; it makes no performance guarantees. In this paper we propose a test suite to measure this performance. The suite has seven benchmarks: overhead of the MPI_THREAD_MULTIPLE level of thread safety, concurrent bandwidth, concurrent latency, concurrent short-long messages, communication/computation overlap, concurrent collectives, and concurrent collectives with computation.