Abstract

Very large block-level data backup systems need scalable data deduplication and garbage collection techniques to make efficient use of storage space while minimizing the performance overhead of doing so. Although deduplication and garbage collection are conceptually straightforward, their implementations pose a significant technical challenge because only a small portion of the associated data structures can fit in memory. In this paper, we describe the design, implementation, and evaluation of a data deduplication and garbage collection engine called Sungem that is designed to remove duplicate blocks in incremental data backup streams. Sungem features three novel techniques to maximize deduplication throughput without compromising the deduplication ratio. First, Sungem puts related fingerprint sequences, rather than fingerprints from the same backup stream, into the same container to increase fingerprint prefetching efficiency. Second, to make the most of the memory space reserved for storing fingerprints, Sungem varies the sampling rates for fingerprint sequences based on their stability. Third, Sungem combines reference counts and expiration times in a unique way to arrive at the first known incremental garbage collection algorithm whose bookkeeping overhead is proportional to the size of a disk volume's incremental backup snapshot rather than its full backup snapshot. We evaluated the Sungem prototype using a real-world data backup trace and showed that Sungem's average throughput exceeds 200,000 fingerprint lookups per second on a standard x86 server, including the garbage collection cost.
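The abstract does not spell out how reference counts and expiration times are combined, so the following is only a minimal illustrative sketch, not Sungem's actual algorithm. It assumes a hypothetical per-fingerprint metadata record (`BlockMeta`) and shows why such a scheme can keep bookkeeping proportional to the incremental snapshot: each backup or expiration only touches metadata for the blocks in its own increment, and a block is reclaimed once it is unreferenced and past its retention deadline.

```python
# Hypothetical sketch (not Sungem's published algorithm): a garbage collector
# that combines reference counts with expiration times so that each backup
# only updates metadata for the blocks appearing in its increment.

import time
from dataclasses import dataclass


@dataclass
class BlockMeta:
    refcount: int = 0        # number of live snapshots referencing this block
    expires_at: float = 0.0  # latest retention deadline among those snapshots


class IncrementalGC:
    def __init__(self) -> None:
        self.meta: dict[str, BlockMeta] = {}  # fingerprint -> metadata

    def ingest_increment(self, fingerprints, retention_secs: float) -> None:
        """Record one incremental snapshot; work is proportional to its size."""
        deadline = time.time() + retention_secs
        for fp in fingerprints:
            m = self.meta.setdefault(fp, BlockMeta())
            m.refcount += 1
            m.expires_at = max(m.expires_at, deadline)

    def expire_increment(self, fingerprints) -> None:
        """Drop the references held by one expired snapshot (again increment-sized)."""
        for fp in fingerprints:
            m = self.meta.get(fp)
            if m is not None:
                m.refcount -= 1

    def collect(self) -> list[str]:
        """Reclaim blocks that are unreferenced and past their retention deadline."""
        now = time.time()
        dead = [fp for fp, m in self.meta.items()
                if m.refcount <= 0 and m.expires_at <= now]
        for fp in dead:
            del self.meta[fp]  # in a real system this would free the on-disk block
        return dead
```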
