Abstract

Consistency properties provided by most key-value stores can be classified into sequential consistency and eventual consistency. The former is easier to program with but suffers from lower performance, whereas the latter suffers from potential anomalies while providing higher performance. We focus on the problem of what a designer should do if he/she has an algorithm that works correctly with sequential consistency but is faced with an underlying key-value store that provides a weaker (e.g., eventual or causal) consistency. We propose a detect-rollback based approach: the designer identifies a correctness predicate, say P, and continues to run the protocol while our system monitors P. If P is violated (because the underlying key-value store provides a weaker consistency), the system rolls back and resumes the computation at a state where P holds.

We evaluate this approach with graph-based applications running on the Voldemort key-value store. Our experiments with deployment on Amazon AWS EC2 instances show that using eventual consistency with monitoring can provide a 50–80% increase in throughput compared with sequential consistency. We also observe that the overhead of the monitoring itself is low (typically less than 4%) and that the latency of detecting violations is small. In particular, in a scenario designed to intentionally cause a large number of violations, more than 99.9% of violations were detected in less than 50 ms in regional networks (all clients and servers in the same Amazon AWS region) and in less than 3 s in global networks.

We find that for some applications, frequent rollback can cause the program using eventual consistency to effectively stall. We propose alternate mechanisms for dealing with recurring rollbacks. Overall, for the applications considered in this paper, we find that even with rollback, eventual consistency provides better performance than sequential consistency.
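
As a minimal sketch of the detect-rollback approach, the control loop below (in Java) keeps the application running on the eventually consistent store while a monitor watches P and rolls the computation back to the last state where P held. The Monitor, Checkpoint, and Application interfaces are illustrative assumptions, not the system's actual code.

    // Hedged sketch of the detect-rollback control loop described in the abstract.
    // Monitor, Checkpoint, and Application are hypothetical interfaces used only
    // for illustration; they are not the paper's actual implementation.
    public final class DetectRollbackLoop {
        interface Monitor { boolean violationDetected(); }   // watches predicate P
        interface Checkpoint { void restore(); }              // a state where P held
        interface Application {
            Checkpoint takeCheckpoint();                      // snapshot current state
            void step();                                      // one unit of work on the store
            boolean done();
        }

        static void run(Application app, Monitor monitor) {
            Checkpoint safe = app.takeCheckpoint();
            while (!app.done()) {
                app.step();                                   // run with eventual consistency
                if (monitor.violationDetected()) {
                    safe.restore();                           // roll back to a state where P holds
                } else {
                    safe = app.takeCheckpoint();              // advance the safe point
                }
            }
        }
    }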

Highlights

  • Distributed key-value data stores have gained increasing popularity due to their simple data model and high performance [1]

  • If the number of violations is beyond a certain threshold, clients may conclude that the cost of rollback is too high and they can move to sequential consistency

  • While this causes one to lose the benefits of an eventually consistent key-value store, there would be no need for rollback or monitoring (a sketch of this threshold-based switch follows this list)
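
A minimal sketch of such a threshold-based switch is given below; the violation counter, the counting window, and the consistency-mode enum are assumptions made for illustration and not the paper's exact mechanism.

    // Hedged sketch of a threshold-based switch from eventual to sequential
    // consistency. The threshold, counting window, and ConsistencyMode enum are
    // illustrative assumptions, not the paper's exact mechanism.
    public final class ConsistencySwitch {
        enum ConsistencyMode { EVENTUAL, SEQUENTIAL }

        private final int maxViolationsPerWindow;
        private int violationsInWindow = 0;
        private ConsistencyMode mode = ConsistencyMode.EVENTUAL;

        ConsistencySwitch(int maxViolationsPerWindow) {
            this.maxViolationsPerWindow = maxViolationsPerWindow;
        }

        // Called by the client each time the monitor reports a violation of P.
        void onViolation() {
            violationsInWindow++;
            if (violationsInWindow > maxViolationsPerWindow) {
                // Cost of repeated rollback is judged too high: give up the
                // performance of eventual consistency to avoid further rollbacks.
                mode = ConsistencyMode.SEQUENTIAL;
            }
        }

        void onWindowEnd() { violationsInWindow = 0; }  // reset the counter per window

        ConsistencyMode currentMode() { return mode; }
    }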


Summary

Introduction

Distributed key-value data stores have gained increasing popularity due to their simple data model and high performance [1]. The monitors run a predicate detection algorithm on the information they receive to determine whether the global predicate of interest P has been violated. A candidate sent to the monitor of predicate Pi consists of an HVC interval and a partial copy of the server's local state containing the variables relevant to Pi. The HVC interval is the time interval on the server during which Pi is violated, and the local state holds the values of the variables that make ¬Pi true. The task of a monitor is to determine whether some smaller predicate Pi under its responsibility is violated, i.e., to detect whether a consistent state on which ¬Pi is true exists in the system execution. For a more detailed discussion of linear and semi-linear predicates, we refer to [14].
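
A minimal sketch of the candidate described above might look as follows; HVC timestamps are simplified to scalar values here, and all class and field names are illustrative assumptions rather than the paper's actual data structures.

    import java.util.Map;

    // Hedged sketch of the candidate a server sends to the monitor of predicate Pi:
    // an HVC interval during which Pi was violated, plus the values of the variables
    // that make !Pi true. HVC timestamps are reduced to scalars for brevity; all
    // names are illustrative assumptions, not the paper's actual data structures.
    public final class Candidate {
        // Interval, in HVC time, during which the local part of Pi was violated.
        final long hvcStart;
        final long hvcEnd;

        // Partial copy of the server's local state: only variables relevant to Pi.
        final Map<String, Object> relevantState;

        Candidate(long hvcStart, long hvcEnd, Map<String, Object> relevantState) {
            this.hvcStart = hvcStart;
            this.hvcEnd = hvcEnd;
            this.relevantState = relevantState;
        }

        // The monitor looks for candidates from different servers whose HVC intervals
        // overlap, i.e., evidence of a consistent global state on which !Pi holds.
        boolean overlaps(Candidate other) {
            return this.hvcStart <= other.hvcEnd && other.hvcStart <= this.hvcEnd;
        }
    }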

Algorithm excerpt (not fully reproduced here): the client initializes its state and then writes the new values to the data store.
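
The "write new values to data-store" step can be illustrated with Voldemort's standard client API; in the example below the bootstrap URL, store name, key, and value are placeholders rather than values from the paper.

    import voldemort.client.ClientConfig;
    import voldemort.client.SocketStoreClientFactory;
    import voldemort.client.StoreClient;
    import voldemort.client.StoreClientFactory;

    // Illustrative example of the "write new values to data-store" step using
    // Voldemort's standard client API. The bootstrap URL, store name, key, and
    // value are placeholders, not values from the paper.
    public class WriteExample {
        public static void main(String[] args) {
            StoreClientFactory factory = new SocketStoreClientFactory(
                    new ClientConfig().setBootstrapUrls("tcp://localhost:6666"));
            StoreClient<String, String> client = factory.getStoreClient("my_store");

            client.put("vertex-42", "new-value");           // write the new value
            String stored = client.getValue("vertex-42");   // read it back
            System.out.println("stored = " + stored);
        }
    }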
Evaluation results and discussion
Discussion
Related work
Conclusion
