Abstract
Task-parallel problems are difficult to implement efficiently in parallel because they are asynchronous and unpredictable. The difficulties are compounded on distributed-memory computers, where interprocessor communication can impose substantial overhead. A few languages and libraries have been proposed that are specifically designed to support this kind of computation. However, one major challenge remains: making those tools understood and used by scientists, engineers, and others who want to exploit the power of parallel computers without spending much effort mastering them. The PMESC programming paradigm and library presented here are designed to make programming on distributed-memory computers easy to understand and to make efficient parallel code easy to produce. The paradigm provides a methodology for structuring task-parallel problems that separates the different phases of the computation. The library supports the phases that are application-independent, allowing users to concentrate on the application-specific ones.
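To make the separation of phases concrete, the following is a minimal sketch, not the PMESC API, of an application-independent task-pool driver parameterized by an application-specific execution callback. All identifiers (task_t, pool_t, run_pool, my_execute) are hypothetical, and the example is sequential; on a distributed-memory machine the driver would additionally handle mapping, load balancing, and communication among processors.

    /*
     * Sketch only (not the PMESC API): a generic, application-independent
     * task-pool driver parameterized by application-specific callbacks.
     * All names are hypothetical.
     */
    #include <stdio.h>
    #include <stdlib.h>

    typedef struct {            /* an application-defined unit of work */
        int data;
    } task_t;

    typedef struct {            /* a simple LIFO pool of pending tasks */
        task_t *items;
        int     count, capacity;
    } pool_t;

    static void pool_push(pool_t *p, task_t t) {
        if (p->count == p->capacity) {
            p->capacity = p->capacity ? 2 * p->capacity : 16;
            p->items = realloc(p->items, p->capacity * sizeof(task_t));
        }
        p->items[p->count++] = t;
    }

    static int pool_pop(pool_t *p, task_t *t) {
        if (p->count == 0) return 0;
        *t = p->items[--p->count];
        return 1;
    }

    /* Application-independent phase: drain the pool, letting the
     * application-specific callback execute each task and possibly
     * spawn new ones.                                              */
    static void run_pool(pool_t *pool,
                         void (*execute)(task_t, pool_t *)) {
        task_t t;
        while (pool_pop(pool, &t))
            execute(t, pool);
    }

    /* Application-specific phase: a toy task that splits itself
     * until the work is small enough, then "computes".            */
    static void my_execute(task_t t, pool_t *pool) {
        if (t.data > 1) {
            task_t left  = { t.data / 2 };
            task_t right = { t.data - t.data / 2 };
            pool_push(pool, left);
            pool_push(pool, right);
        } else {
            printf("leaf task of size %d done\n", t.data);
        }
    }

    int main(void) {
        pool_t pool = { NULL, 0, 0 };
        task_t root = { 8 };
        pool_push(&pool, root);
        run_pool(&pool, my_execute);   /* generic driver + specific callback */
        free(pool.items);
        return 0;
    }

The point of the sketch is the division of labor: run_pool knows nothing about the application, and my_execute knows nothing about task management, which mirrors the split between library-supported and user-written phases described above.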