Abstract

The evolution in parallel programming languages is toward implicit and virtual parallelism: explicitly coding for parallelism is to be avoided, and coding for the physical machine size is a low-level practice to be overcome as soon as possible. Our examples indicate this may not be possible in general, although it may well be a realistic alternative for many numerical codes with simple structure. Much emphasis is now placed on data-parallel languages, where parallelism is implied by the use of aggregate operations on data aggregates (mostly array operations on arrays); parallelism is derived either from parallel execution of these aggregate operations or from a data partition. Our examples imply that control parallelism, where parallelism is derived from explicit user allocation of operations to (virtual or physical) processors, is necessary to express certain algorithms.
