Abstract

Distributed memory multiprocessors and network clusters are increasingly used as parallel computing resources because of their scalability and cost/performance advantages. However, shared memory parallel programming is generally considered easier than explicit message passing programming. Although the generative communication model offers the scalability of message passing together with the simplicity of shared memory programming, implementing this model efficiently on machines with physically distributed memories is a challenge. This paper describes the issues involved in implementing the essential component of generative communication, the shared data space abstraction called tuplespace, on a distributed memory machine. The paper gives a detailed description of Deli, a UNIX-based distributed tuplespace implementation for a network of workstations. This description, along with a discussion of implementation alternatives, provides a detailed basis for designers and implementors of shared data spaces that is not currently available in the literature. © 1997 John Wiley & Sons, Ltd.
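
For readers unfamiliar with the model, the sketch below is a minimal, single-process illustration of the Linda-style tuplespace operations (out, rd, in) on which generative communication is built. It is not Deli's API or its distributed implementation; the class name, wildcard convention, and locking scheme are illustrative assumptions only.

```python
import threading

class TupleSpace:
    """Minimal in-process tuple space: out() adds a tuple, rd() reads a
    matching tuple without removing it, in_() removes and returns one.
    None fields in a pattern act as wildcards (an assumed convention)."""

    def __init__(self):
        self._tuples = []
        self._cond = threading.Condition()

    def out(self, tup):
        # Deposit a tuple into the space and wake any blocked readers.
        with self._cond:
            self._tuples.append(tup)
            self._cond.notify_all()

    def _match(self, pattern, tup):
        return len(pattern) == len(tup) and all(
            p is None or p == t for p, t in zip(pattern, tup))

    def in_(self, pattern):
        # Block until a matching tuple exists, then remove and return it.
        with self._cond:
            while True:
                for t in self._tuples:
                    if self._match(pattern, t):
                        self._tuples.remove(t)
                        return t
                self._cond.wait()

    def rd(self, pattern):
        # Block until a matching tuple exists; return it without removal.
        with self._cond:
            while True:
                for t in self._tuples:
                    if self._match(pattern, t):
                        return t
                self._cond.wait()

if __name__ == "__main__":
    ts = TupleSpace()
    ts.out(("task", 1, "payload"))
    print(ts.rd(("task", None, None)))   # read without removing
    print(ts.in_(("task", 1, None)))     # remove and return
```

In a distributed setting such as the one the paper addresses, the central design question is how this logically shared space is partitioned or replicated across physically separate memories; the single-lock structure above is only a conceptual starting point.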
