Abstract

Parallel applications in the area of scientific computing are often designed in a data parallel SPMD (single program multiple data) style based on the MPI (message passing interface) standard. The advantage of this approach is a clear programming model, but on large parallel platforms or cluster systems, speedup and scalability can be limited, especially when collective communication operations are used frequently. The combination of task and data parallelism can improve the speedup and scalability of many applications on distributed memory machines, but it requires a more intricate program development. To support a systematic design of mixed task and data parallel programs, the TwoL model has been introduced. A key feature of this model is its development support for applications using multiprocessor tasks on top of data parallel modules. This chapter discusses implementation issues of the TwoL model as an open framework. It focuses on the design of the framework and its internal algorithms and data structures. As examples, fast parallel matrix multiplication algorithms are discussed to illustrate the applicability of the approach.
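To illustrate the data parallel SPMD style referred to above, the following minimal MPI sketch (an illustration, not code from the chapter) shows every process executing the same program on its own data and synchronizing through a collective reduction; frequent collectives of this kind are what can limit scalability on large platforms.

    /* Minimal SPMD sketch: each process holds a local value (placeholder
       data derived from its rank) and a collective reduction combines the
       partial results across all processes. */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        MPI_Init(&argc, &argv);

        int rank, size;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        /* Local computation on this process's block of data. */
        double local = (double)(rank + 1);
        double global = 0.0;

        /* Collective communication: all processes participate, which is
           the kind of operation whose frequent use limits scalability. */
        MPI_Allreduce(&local, &global, 1, MPI_DOUBLE, MPI_SUM, MPI_COMM_WORLD);

        if (rank == 0)
            printf("global sum = %f (over %d processes)\n", global, size);

        MPI_Finalize();
        return 0;
    }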
