Abstract

The numerical solution of differential equations is an important problem in the natural sciences and engineering, but the computational effort required to find a solution of the desired accuracy is usually large. This suggests the use of powerful parallel machines, which often have a distributed memory organization. In this article, we present a parallel programming methodology for deriving structured parallel implementations of numerical methods that exhibit two levels of potential parallelism: a coarse-grain method parallelism and a medium-grain parallelism across data or systems. The derivation process is subdivided into three stages: the first identifies the potential parallelism in the numerical method, the second fixes the implementation decisions for a parallel program, and the third derives the parallel implementation for a specific parallel machine. The derivation process is supported by a group-SPMD computational model that allows runtimes to be predicted for a specific parallel machine, enabling the programmer to test different alternatives and to implement only the most promising one. We give several examples of the derivation of parallel implementations and of the performance prediction. Experiments on an Intel iPSC/860 confirm the accuracy of the runtime predictions. The parallel programming methodology separates software issues from architectural details, enables the design of well-structured, reusable, and portable software, and supplies a formal basis for automatic support.
