Abstract

Distributed file systems must provide fault tolerance, which is typically achieved by replicating files. Existing approaches to building replicated file systems sacrifice strong semantics (i.e., the guarantees the system makes to running computations when failures occur and/or files are accessed concurrently), mainly for reasons of efficiency. This paper puts forward a replicated file system protocol that enforces strong consistency semantics. Enforcing strong semantics allows a distributed system to behave more like its centralized counterpart, an essential property for providing the transparency that distributed computing systems strive for. One characteristic of our protocol is its distributed nature, which keeps the extra cost of ensuring the stronger consistency low (the bottleneck problem observed in primary-copy systems is avoided), facilitates load balancing, lets clients choose physically close servers, and reduces the work required during failure handling and recovery. Another characteristic is that, instead of optimizing each operation type on its own, file system activity is viewed at the level of a file session, so the costs of individual operations can be spread over the life of a session. We have developed a prototype and compared its performance to both NFS and a nonreplicated version of the prototype that also achieves strong consistency semantics. These comparisons show the cost of replication and the cost of enforcing the strong consistency semantics.
