Abstract

Porting a code that uses the Message Passing Interface (MPI) between distributed memory platforms is as simple as moving the program to the target machine and recompiling. For codes written under the shared memory paradigm to benefit from this easy portability, they must first be translated to execute in a distributed memory environment. The author focuses on the translation of non-numeric parallel algorithms from shared memory to distributed memory machines. Specifically, he presents techniques to determine where calls to MPI message passing routines must be inserted to preserve the data access patterns inherent in the original shared memory code.
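To illustrate the kind of transformation the abstract describes, here is a minimal sketch (not taken from the paper) of the core idea: in a shared memory program one process can read data another process wrote directly, but after translation to a distributed memory model that read must be replaced by an explicit message, a receive paired with a send on the process that owns the data. Python's standard-library `multiprocessing` pipes are used here as a hypothetical stand-in for `MPI_Send`/`MPI_Recv`; the process names and data are illustrative assumptions, not the author's example.

```python
from multiprocessing import Process, Pipe

def owner(conn):
    # In the shared memory version this array would simply live in
    # memory visible to all threads; here the owning process must
    # send it explicitly (stand-in for MPI_Send).
    data = [i * i for i in range(4)]
    conn.send(data)
    conn.close()

def reader(conn, out):
    # The shared memory code would read the array directly; the
    # translation preserves the access pattern by inserting an
    # explicit receive at this point (stand-in for MPI_Recv).
    data = conn.recv()
    out.send(sum(data))
    out.close()

def distributed_sum():
    a, b = Pipe()
    c, d = Pipe()
    p0 = Process(target=owner, args=(a,))
    p1 = Process(target=reader, args=(b, c))
    p0.start(); p1.start()
    result = d.recv()
    p0.join(); p1.join()
    return result

if __name__ == "__main__":
    print(distributed_sum())  # prints 14 (0 + 1 + 4 + 9)
```

The point of the sketch is the placement question the paper addresses: the send/receive pair must be inserted exactly where the original shared memory access occurred, so the distributed version observes the same values in the same order.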
