Abstract

Consistency is one of the fundamental issues of distributed computing. There are many competing consistency models, each with subtly different power in principle. In practice, the well-known Consistency-Availability-Partition Tolerance (CAP) trade-off translates to difficult choices between fault tolerance, performance, and programmability. The issues and trade-offs are particularly vexing at scale, with a large number of processes or a large shared database, and in the presence of high latency and failure-prone networks. It is clear that there is no one universally best solution. Possible approaches cover the whole spectrum between strong and eventual consistency. Strong consistency (linearizability or serializability, achieved via total ordering) provides familiar and intuitive semantics, but requires slow and fragile synchronization and coordination, with significant overhead. The unlimited parallelism allowed by weaker models such as eventual consistency promises high performance, but divergence and conflicts make it difficult to ensure useful application invariants, and metadata is hard to keep in check. The research and development communities are actively exploring intermediate models (replicated data types, monotonic programming, CRDTs, LVars, causal consistency, red-blue consistency, invariant- and proof-based systems, etc.), designed to improve efficiency, programmability, and overall operation without negatively impacting scalability.
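To make the intermediate models mentioned above a little more concrete, the following is a minimal sketch (not from the paper) of one of the simplest state-based CRDTs, a grow-only counter. Each replica increments only its own slot, and merging takes the element-wise maximum; because merge is commutative, associative, and idempotent, replicas converge without any coordination. All names here are illustrative assumptions, not an API defined by the authors.

```python
# Hypothetical sketch of a state-based grow-only counter (G-Counter) CRDT.
# Each replica only ever writes its own slot; merging takes the element-wise
# maximum, so concurrent updates commute and all replicas converge.

class GCounter:
    def __init__(self, replica_id: str):
        self.replica_id = replica_id
        self.counts: dict = {}

    def increment(self, amount: int = 1) -> None:
        # Only the local slot is written, so no replica can lose
        # another replica's increments.
        self.counts[self.replica_id] = self.counts.get(self.replica_id, 0) + amount

    def value(self) -> int:
        return sum(self.counts.values())

    def merge(self, other: "GCounter") -> None:
        # Element-wise max is commutative, associative, and idempotent:
        # merging in any order, any number of times, yields the same state.
        for rid, c in other.counts.items():
            self.counts[rid] = max(self.counts.get(rid, 0), c)


# Usage: two replicas update independently, then exchange state and converge.
a, b = GCounter("a"), GCounter("b")
a.increment(3)
b.increment(2)
a.merge(b)
b.merge(a)
assert a.value() == b.value() == 5
```

The price of this coordination-free convergence is exactly the kind of trade-off the abstract describes: the data type's operations must be restricted (here, grow-only), and per-replica metadata grows with the number of replicas.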
