Abstract

The large number of processing elements in current parallel systems necessitates more comprehensive and realistic tools for the scalability analysis of algorithms on those architectures. This paper presents a simple analytical tool for studying the scalability of parallel algorithm-architecture combinations. Our practical method studies execution time, efficiency, and memory usage separately in the accuracy-critical scaling model, where the problem size (the input data set size) increases with the number of processors; this is the relevant model in many situations. The paper defines quantitative and qualitative measures of scalability and derives important relationships between execution time and efficiency. For example, the results show that the best way to scale the system (i.e., the way that degrades the system's properties the least) is to maintain constant execution time. These analytical results are verified with one candidate application for massively parallel computers: the full multigrid method. We study the scalability of a general d-dimensional full multigrid method on an r-dimensional mesh of processors. The analytical expressions are verified through experimental results obtained by implementing the full multigrid method on a Transputer-based machine and on the CRAY T3D.
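The abstract's central claim, that growing the problem size so that execution time stays constant degrades efficiency the least, can be illustrated with a toy cost model. The sketch below is a minimal Python illustration, not the paper's analysis: the cost model T(n, p) = n/p + c*log2(p) (sequential work divided among p processors plus a logarithmic communication overhead) and all constants are assumptions chosen only for demonstration.

```python
import math

C = 0.01  # assumed per-level communication cost (seconds); hypothetical constant


def exec_time(n: float, p: int) -> float:
    """Assumed cost model: time on p processors for a problem whose sequential time is n."""
    return n / p + (C * math.log2(p) if p > 1 else 0.0)


def efficiency(n: float, p: int) -> float:
    """Parallel efficiency E = T(n, 1) / (p * T(n, p))."""
    return exec_time(n, 1) / (p * exec_time(n, p))


n1 = 1.0  # baseline: one second of work on a single processor
for p in (16, 128, 1024):
    e_fixed = efficiency(n1, p)                 # (a) problem size held fixed
    n_scaled = (n1 - C * math.log2(p)) * p      # (b) grow n so that T(n, p) == n1
    e_const_time = efficiency(n_scaled, p)
    print(f"p={p:4d}  E(fixed size)={e_fixed:.3f}  E(constant time)={e_const_time:.3f}")
```

Under this assumed model, efficiency collapses when the problem size is held fixed but remains high when the problem is scaled to keep execution time constant, which is the qualitative behavior the abstract describes.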
