Abstract
A comparison between OpenMP, a thread-based programming model, and MPI, a message passing programming model, is conducted on multicore shared memory machine architectures to determine which has better performance in terms of speed and throughput. The application used to assess the scalability of the evaluated parallel programming solutions is matrix multiplication with a customizable matrix dimension. Much research has been done on large-scale parallel computing using large-scale benchmarks such as the NAS Parallel Benchmarks (NPB) for testing standardization [2]. This research is conducted on small-scale parallel computing and emphasizes the performance evaluation of the MPI and OpenMP parallel programming models using a self-created benchmark. It also describes how worksharing is done under each parallel programming model. It gives a comparative result between the message passing and shared memory programming models in runtime and throughput. The testing methodology is also simple and has high usability on the available resources.
Highlights
The growth of multicore processors has increased the need for parallel programs on the largest to the smallest of systems. There are many ways to express parallelism in a program
Message Passing Interface (MPI) performance for shared memory systems will be tested on a cluster of shared memory machines
OpenMP will be used as a reference on the same multicore systems as the MPI clusters [2]
Summary
The growth of multicore processors has increased the need for parallel programs on the largest to the smallest of systems (clusters to laptops). There are many ways to express parallelism in a program. In HPC (High Performance Computing), MPI (Message Passing Interface) has been the main tool for the parallel message passing programming model for most programmers [1,3]. Shared memory multiprocessor machines predate the multicore era (i.e. before multicore, dual socket servers provided two processors like today's dual core processors), and programming in this environment is essentially a matter of using POSIX threads. OpenMP was developed to give programmers a higher level of abstraction and make thread programming easier. In accordance with the growth of the multicore trend, parallel programming using OpenMP has gained popularity among HPC developers. Together with the growth of the thread programming model on shared memory machines, MPI, which had been intended for parallel distributed systems since MPI-1, has improved to support shared memory systems. MPI performance for shared memory systems will be tested on a cluster of shared memory machines. OpenMP will be used as a reference on the same multicore systems as the MPI clusters (both MPI and OpenMP will have an equal number of core workers) [2]
More From: International Journal of Advanced Computer Science and Applications