Abstract
Automatic parallelization of sequential code involves finding parallel segments in the code and executing those segments in parallel by distributing them to different computers in a grid. Parallel segments can be identified through block-level analysis, instruction-level analysis, or function-level analysis, where a block is any contiguous part of the code that performs a particular task. This paper presents a hybrid approach that combines block-level analysis with function-level analysis for the parallelization of sequential code, and illustrates its advantages over block-level parallelization and function-level parallelization performed independently. In this approach, segments of code are identified as basic blocks, and these blocks are analyzed to classify them as parallelizable or dependent. Loops, which are also identified as blocks, are parallelized using existing loop parallelization techniques [1]. This information is then used for automatic parallel processing of the set of independent blocks on different nodes in the grid using the Message Passing Interface (MPI). The system annotates the program with MPI library calls at appropriate positions in the source code to carry out the automatic parallelization and execution of the program.
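To make the idea concrete, the following is a minimal hand-written sketch of the kind of MPI-annotated code such a system could emit for two independent blocks; the abstract does not show generated output, so the functions block_a and block_b are hypothetical placeholders for independent code segments identified by the analysis.

```c
/* Hypothetical sketch: two independent basic blocks dispatched to
 * different MPI ranks, with the results combined on rank 0.
 * block_a() and block_b() stand in for independent code segments
 * identified by the block/function-level analysis. */
#include <mpi.h>
#include <stdio.h>

static int block_a(void) { return 1 + 2; }   /* independent segment A */
static int block_b(void) { return 3 * 4; }   /* independent segment B */

int main(int argc, char **argv)
{
    int rank;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 1) {
        /* Second independent block runs on rank 1 and sends its result back. */
        int b = block_b();
        MPI_Send(&b, 1, MPI_INT, 0, 0, MPI_COMM_WORLD);
    } else if (rank == 0) {
        /* First independent block runs on rank 0 while rank 1 works concurrently. */
        int a = block_a();
        int b = 0;
        MPI_Recv(&b, 1, MPI_INT, 1, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        printf("block_a = %d, block_b = %d\n", a, b);
    }

    MPI_Finalize();
    return 0;
}
```

Run with, for example, `mpirun -np 2 ./a.out`; the two blocks execute concurrently on separate ranks, and any data dependence between blocks would instead force sequential ordering or explicit communication.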