Abstract

Shared memory parallel computers have a reputation for being the easiest type of parallel computer to program. At the same time, they are frequently regarded as the least scalable type of parallel computer. In particular, shared memory parallel computers are commonly programmed using a form of loop-level parallelism, usually based on some combination of compiler directives and automatic parallelization. However, in discussing this form of parallelism, experts in the field routinely say that it will not scale beyond 4-16 processors (the exact number varies by expert). This report investigates the true limitations of this style of parallel programming. The discussion is largely based on the authors' experience porting the implicit computational fluid dynamics code F3D to numerous shared memory systems from SGI, Cray, and Convex.
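
To make the notion of loop-level parallelism concrete, the sketch below shows a single loop parallelized with a compiler directive. It uses OpenMP purely for illustration; the original F3D port predates OpenMP and relied on vendor-specific directives and automatic parallelization on the SGI, Cray, and Convex compilers, so the exact syntax here is an assumption, not the method of the report.

```c
/* Minimal sketch of loop-level parallelism via a compiler directive.
 * OpenMP is used for illustration only; the F3D port described in the
 * report used vendor-specific directives, not OpenMP. */
#include <stdio.h>

#define N 1000000

int main(void) {
    static double a[N], b[N], c[N];

    /* Initialize the input arrays serially. */
    for (int i = 0; i < N; i++) {
        b[i] = (double)i;
        c[i] = 2.0 * (double)i;
    }

    /* The directive asks the compiler to distribute the loop iterations
     * across the available threads; each iteration is independent, so
     * no synchronization is needed inside the loop body. */
    #pragma omp parallel for
    for (int i = 0; i < N; i++) {
        a[i] = b[i] + c[i];
    }

    printf("a[N-1] = %f\n", a[N - 1]);
    return 0;
}
```

Compiled with an OpenMP-capable compiler (e.g. `gcc -fopenmp`), the loop runs across multiple threads; without the flag, the directive is ignored and the program runs serially, which is the low-effort, incremental character of this programming model that the report examines.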
