Deduplication has been principally employed in distributed storage systems to improve storage space efficiency. Traditional deduplication research ignores the design specifications of shared-nothing distributed storage systems, such as the absence of a central metadata bottleneck, scalability, and storage rebalancing. Likewise, inline deduplication integration poses serious threats to storage system read/write performance, consistency, and scalability. This is mainly due to ineffective and error-prone deduplication metadata, duplicate-lookup I/O redirection, and the placement of content fingerprints and data chunks. Further, transaction failures after deduplication integration often render data chunks and deduplication metadata inconsistent and leave behind garbage data chunks. In this paper, we propose Grate, a high-performance inline cluster-wide data deduplication scheme that complies with the design constraints of shared-nothing storage systems. In particular, Grate eliminates duplicate copies across the cluster for high storage space efficiency without jeopardizing performance. We employ a distributed deduplication metadata shard, which enables high-performance deduplication metadata and duplicate-fingerprint lookup I/Os without introducing a single point of failure. Data and deduplication metadata are placed cluster-wide based on the content fingerprints of chunks. We decouple the deduplication metadata shard from the read I/O path and replace it with a read manifestation object to further speed up read performance. To guarantee deduplication-enabled transaction consistency and efficient garbage identification, we design a flag-based asynchronous consistency scheme capable of repairing missing data chunks on duplicate arrival. We design and implement Grate in Ceph. The evaluation shows an average bandwidth improvement of 18% over the content-addressable deduplication approach at smaller chunk sizes, i.e., less than 128 KB, while maintaining high storage space savings.
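To illustrate the content-fingerprint-based placement idea summarized above, the following Python sketch maps each chunk to a deduplication metadata shard purely from a hash of its content. This is a minimal sketch, not Grate's actual Ceph implementation; the shard count, the SHA-256 fingerprint, and the modulo placement rule are assumptions for illustration only.

```python
import hashlib

NUM_SHARDS = 8  # assumed shard count, for illustration only


def fingerprint(chunk: bytes) -> str:
    """Compute a content fingerprint for a chunk (SHA-256 assumed here)."""
    return hashlib.sha256(chunk).hexdigest()


def shard_for(fp: str, num_shards: int = NUM_SHARDS) -> int:
    """Choose the metadata shard for a fingerprint from the fingerprint
    alone, so any node can locate the duplicate-lookup entry without
    consulting a central metadata server."""
    return int(fp, 16) % num_shards


# Identical content always yields the same fingerprint and therefore the
# same shard, so a duplicate-lookup I/O is directed to exactly one shard.
chunk = b"example chunk payload"
fp = fingerprint(chunk)
print(f"fingerprint {fp[:16]}... -> shard {shard_for(fp)}")
```

Because placement is derived only from the chunk's content, deduplication metadata lookups scale out with the number of shards instead of funneling through a single lookup service, which is the shared-nothing property the abstract emphasizes.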