Abstract

As is well known, any algorithm that is correct in an asynchronous shared memory setting (physically shared memory) can be applied directly in distributed shared memory (DSM) systems, provided that the latter guarantees strong consistency (atomic or sequential) of replicas. In DSM systems, however, weaker consistency models (causal, processor, PRAM, etc.) are often adopted to improve performance. Weakening the consistency model may, in turn, render the algorithm incorrect. We therefore face the consistency requirement problem: finding the weakest DSM consistency model that is both necessary and sufficient for algorithm correctness. We consider a reliable DSM environment and present a complex consistency model comprising three elementary models: sequential consistency, coherence and PRAM consistency. This complex model is then applied to Dijkstra's (1965) algorithm for mutual exclusion of n processes, one of the first solutions to a fundamental problem in both centralised and distributed operating systems. In the resulting algorithm, coherence and PRAM consistency are associated with some of the write operations performed on shared memory locations. Since concurrent execution of write operations under weaker consistency models is more efficient than execution of strongly consistent operations, the proposed solution reduces synchronisation delay (mutual exclusion overhead) and thereby increases system throughput. The presented model is proven to be sufficient for algorithm correctness. Moreover, the algorithm is shown to be optimal in the sense that further relaxation of the semantics of any write operation violates the progress (liveness) or safety properties of the algorithm.
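For concreteness, below is a minimal sketch of Dijkstra's original (1965) n-process mutual exclusion algorithm, written in Java with atomic variables so that every shared read and write behaves as strongly (sequentially) consistent. The paper's contribution, not reproduced here, is identifying which of these writes may be weakened to coherence or PRAM consistency without breaking safety or liveness; the class and method names (DijkstraMutex, lock, unlock) are illustrative, not taken from the paper.

import java.util.concurrent.atomic.AtomicBoolean;
import java.util.concurrent.atomic.AtomicInteger;

public final class DijkstraMutex {
    private final int n;
    private final AtomicBoolean[] b;   // b[i] == true: process i is passive (not competing)
    private final AtomicBoolean[] c;   // c[i] == true: process i has not yet claimed the CS
    private final AtomicInteger k = new AtomicInteger(0); // index of the currently favoured process

    public DijkstraMutex(int n) {
        this.n = n;
        b = new AtomicBoolean[n];
        c = new AtomicBoolean[n];
        for (int i = 0; i < n; i++) {
            b[i] = new AtomicBoolean(true);
            c[i] = new AtomicBoolean(true);
        }
    }

    public void lock(int i) {
        b[i].set(false);                       // announce interest in the critical section
        while (true) {
            if (k.get() != i) {                // not our turn: try to take over
                c[i].set(true);
                if (b[k.get()].get()) {        // favoured process is passive, claim the turn
                    k.set(i);
                }
            } else {
                c[i].set(false);               // tentatively enter
                boolean alone = true;
                for (int j = 0; j < n; j++) {  // verify no other contender passed this point
                    if (j != i && !c[j].get()) { alone = false; break; }
                }
                if (alone) return;             // safe to enter the critical section
            }
        }
    }

    public void unlock(int i) {
        c[i].set(true);                        // retract the claim
        b[i].set(true);                        // become passive again
    }
}

In Dijkstra's formulation, b[i] records that process i is competing, c[i] guards the final mutual check, and k designates the process currently favoured to enter; the inner loop re-checks all contenders because several processes may simultaneously pass the point where c[j] is set to false.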
