Abstract

With the advent of parallel computing, it has become necessary to write OpenMP programs to achieve better speedup and to exploit parallel hardware efficiently. To do so, however, programmers must understand OpenMP directives and clauses as well as the dependencies in their code. A small mistake, such as misanalysing a dependency or scoping a variable incorrectly, can result in an incorrect or inefficient program. In this paper, we propose a system that automates the parallelization of serial C code. The system accepts a serial program as input and generates the corresponding parallel OpenMP code without altering the core logic of the program. It uses the data-scoping and work-sharing constructs available in OpenMP and targets "for" loops, "while" loops, nested "for" loops, and recursive structures. The system parallelizes "for" loops by analysing the induction variable and converts "while" loops into "for" loops before parallelizing them. The system is tested on several input programs, such as matrix addition, quick sort, and linear search. The execution times of the programs before and after parallelization are measured, and a graph is plotted to help visualize the decrease in execution time.
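
As an illustration of the kind of transformation described above, the sketch below shows a serial matrix-addition loop annotated with an OpenMP work-sharing directive and explicit data-scoping clauses. The array size, variable names, and clause choices are assumptions made for this example, not output produced by the system itself.

    #include <stdio.h>
    #include <omp.h>

    #define N 512

    int a[N][N], b[N][N], c[N][N];

    int main(void)
    {
        int i, j;

        /* Fill the input matrices with sample values (serial). */
        for (i = 0; i < N; i++)
            for (j = 0; j < N; j++) {
                a[i][j] = i + j;
                b[i][j] = i - j;
            }

        /* Work-sharing construct: iterations of the outer loop are divided
           among threads. The outer induction variable i is implicitly private;
           the inner induction variable j is scoped private explicitly, while
           the matrices remain shared. */
        #pragma omp parallel for private(j) shared(a, b, c)
        for (i = 0; i < N; i++)
            for (j = 0; j < N; j++)
                c[i][j] = a[i][j] + b[i][j];

        printf("c[N-1][N-1] = %d\n", c[N - 1][N - 1]);
        return 0;
    }

Compiled with an OpenMP-enabled compiler (e.g., gcc -fopenmp), this version distributes the outer-loop iterations across threads, which is the kind of directive insertion and data scoping the proposed system performs automatically.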
