Abstract

In Concurrent ML, synchronization abstractions can be defined and passed as values, much like functions in ML. This mechanism admits a powerful, modular style of concurrent programming, called higher-order concurrent programming. Unfortunately, it is not clear whether this style of programming is possible in languages, such as Concurrent Haskell, that support only first-order message passing. Indeed, the implementation of synchronization abstractions in Concurrent ML relies on fairly low-level, language-specific details. In this paper we show, constructively, that synchronization abstractions can be supported in a language that supports only first-order message passing. Specifically, we implement a library that makes Concurrent ML-style programming possible in Concurrent Haskell. We begin with a core, formal implementation of synchronization abstractions in the π-calculus. Then, we extend this implementation to encode all of Concurrent ML's concurrency primitives (and more!) in Concurrent Haskell. Our implementation is surprisingly efficient, even without possible optimizations. In several small, informal experiments, our library seems to outperform OCaml's standard library of Concurrent ML-style primitives. At the heart of our implementation is a new distributed synchronization protocol that we prove correct. Unlike several previous translations of synchronization abstractions into concurrent languages, we remain faithful to the standard semantics of Concurrent ML's concurrency primitives. For example, we retain the symmetry of choose, which can express selective communication. As a corollary, we establish that implementing selective communication on distributed machines is no harder than implementing first-order message passing on such machines.
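For readers less familiar with Concurrent Haskell, the following minimal sketch illustrates the kind of first-order message passing the abstract refers to: threads forked with forkIO, communicating over MVars. The helper names (Chan', newChan', send', recv') are ours, not the paper's library; the sketch only shows the base layer on top of which Concurrent ML-style combinators (in CML notation, e.g. sync (choose [wrap (recvEvt c1) f, wrap (recvEvt c2) g])) would be built.

-- A minimal sketch of first-order message passing in Concurrent Haskell.
-- These helper names are illustrative only; they are not the paper's API.
import Control.Concurrent (forkIO)
import Control.Concurrent.MVar

-- A one-place channel built from an MVar.
type Chan' a = MVar a

newChan' :: IO (Chan' a)
newChan' = newEmptyMVar

send' :: Chan' a -> a -> IO ()
send' = putMVar        -- blocks while the slot is full

recv' :: Chan' a -> IO a
recv' = takeMVar       -- blocks while the slot is empty

main :: IO ()
main = do
  c    <- newChan'
  done <- newEmptyMVar
  _ <- forkIO $ do
         msg <- recv' c
         putStrLn ("received: " ++ msg)
         putMVar done ()
  send' c "hello over a first-order channel"
  takeMVar done        -- wait for the receiver before exiting

The point of the paper is that composable, symmetric synchronization abstractions (including choose) can be layered over exactly such primitives, which a naive encoding cannot do without breaking Concurrent ML's semantics.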
