Abstract

MPI has often been called the “assembly language” of parallel programming. In fact, MPI has succeeded because, like other successful but low-level programming models, it supports both performance programming and “programming in the large” – building support tools, such as software libraries, for large-scale applications. Nevertheless, MPI programming can be challenging, particularly if approached as a replacement for shared-memory-style load/store programming. By examining some representative programming tasks, this talk looks at ways to improve the productivity of parallel programmers: identifying the key communities and their needs, the strengths and weaknesses of the MPI programming model and its implementations, and opportunities for improving productivity both through tools that leverage MPI and through extensions of MPI.