Abstract

Data replication technologies in distributed storage systems introduce the problem of data consistency. For high performance, data replication systems often settle for weak consistency models, such as Pipelined-RAM consistency. To determine whether a data replication system provides Pipelined-RAM consistency, we study the problem of verifying Pipelined-RAM consistency over read/write traces (VPC, for short). Four variants of VPC (labeled VPC-SU, VPC-MU, VPC-SD, and VPC-MD) are identified according to whether there are Multiple shared variables (or one Single variable) and whether write operations can assign Duplicate values (or only Unique values) to each shared variable. We prove that VPC-SD is $\sf{NP}$-complete (and so is VPC-MD) by reducing the strongly $\sf{NP}$-complete problem 3-Partition to it. For VPC-MU, we present the Read-Centric algorithm with time complexity $O(n^4)$, where $n$ is the number of operations. The algorithm constructs an operation graph by iteratively applying a rule which guarantees that no overwritten value can be read later. It incrementally processes the read operations one by one, and exploits the total order between the dictating writes on the same variable to avoid redundant applications of the rule. Experiments demonstrate its practical efficiency and scalability.
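For intuition, the sketch below illustrates the kind of operation graph involved; it is not the paper's Read-Centric algorithm, and the names (Operation, pram_necessary_check) and trace layout are assumptions. For a unique-value trace, it checks a necessary condition for Pipelined-RAM consistency: for each process, it builds a graph over all writes plus that process's reads, adds program-order edges and an edge from each read's dictating write, and reports a certain violation if the graph has a cycle.

```python
# Illustrative sketch, not the Read-Centric algorithm: a necessary-condition
# check for Pipelined-RAM consistency on a unique-value trace.
from dataclasses import dataclass
from typing import Dict, List, Tuple

@dataclass(frozen=True)
class Operation:
    kind: str    # 'W' (write) or 'R' (read)
    proc: int    # issuing process id
    index: int   # position in the issuing process's program order
    var: str     # shared variable
    value: int   # value written or returned (writes are unique per variable)

def pram_necessary_check(trace: Dict[int, List[Operation]]) -> bool:
    """Return False if a Pipelined-RAM violation is certain, True otherwise."""
    # Index the unique dictating write of each (variable, value) pair.
    writes: Dict[Tuple[str, int], Operation] = {}
    for ops in trace.values():
        for op in ops:
            if op.kind == 'W':
                writes[(op.var, op.value)] = op

    def has_cycle(nodes, edges):
        # Standard DFS cycle detection (white/grey/black colouring).
        WHITE, GREY, BLACK = 0, 1, 2
        colour = {n: WHITE for n in nodes}
        def visit(n):
            colour[n] = GREY
            for m in edges.get(n, []):
                if colour[m] == GREY or (colour[m] == WHITE and visit(m)):
                    return True
            colour[n] = BLACK
            return False
        return any(colour[n] == WHITE and visit(n) for n in nodes)

    for p, ops_p in trace.items():
        # Operations visible to p: every write in the trace plus p's reads.
        nodes = [op for ops in trace.values() for op in ops if op.kind == 'W']
        nodes += [op for op in ops_p if op.kind == 'R']
        visible = set(nodes)
        edges: Dict[Operation, List[Operation]] = {}
        # (i) Program order, restricted to the operations visible to p.
        for ops in trace.values():
            seq = [op for op in ops if op in visible]
            for a, b in zip(seq, seq[1:]):
                edges.setdefault(a, []).append(b)
        # (ii) Each read of p must come after its dictating write.
        for op in ops_p:
            if op.kind == 'R':
                w = writes.get((op.var, op.value))
                if w is None:   # value never written (initial values ignored)
                    return False
                edges.setdefault(w, []).append(op)
        if has_cycle(nodes, edges):
            return False
    return True
```

A cycle certifies a violation, but acyclicity alone does not certify Pipelined-RAM consistency: the sketch omits the rule, central to the Read-Centric algorithm, that an overwritten value must not be read later.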
