Abstract

Data deduplication is a popular dictionary-based compression method for storage archival and backup. Deduplication efficiency (``chunk'' matching) improves with smaller chunk sizes; however, files become highly fragmented, requiring many disk accesses during reconstruction, or chattiness in a client-server architecture. Within the sequence of chunks that an object (file) is decomposed into, sub-sequences of adjacent chunks tend to repeat. We exploit this insight to optimize chunk sizes by joining repeated sub-sequences of small chunks into new ``super chunks,'' under the constraint of achieving practically the same matching performance. We employ suffix arrays to find these repeating sub-sequences and to determine a new encoding that covers the original sequence. With super chunks we significantly reduce fragmentation, improving reconstruction time and the overall deduplication ratio by lowering the amount of metadata (fewer hashes and dictionary entries).
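To make the suffix-array step concrete, the following is a minimal sketch (not the paper's implementation): the file's chunk-ID sequence, the `min_len` threshold, and the example values are all assumptions for illustration. A suffix array plus Kasai's LCP array over the chunk-ID sequence exposes repeated sub-sequences of adjacent chunks, which are the candidates for merging into super chunks.

```python
# Sketch: find repeated sub-sequences of adjacent chunk IDs as super-chunk
# candidates, via a suffix array and LCP array over the chunk sequence.
from typing import List, Tuple

def suffix_array(seq: List[int]) -> List[int]:
    # O(n log^2 n) prefix-doubling construction; adequate for a sketch.
    n = len(seq)
    rank, sa, k = list(seq), list(range(n)), 1
    while k < n:
        key = lambda i: (rank[i], rank[i + k] if i + k < n else -1)
        sa.sort(key=key)
        new_rank = [0] * n
        for j in range(1, n):
            new_rank[sa[j]] = new_rank[sa[j - 1]] + (key(sa[j]) != key(sa[j - 1]))
        rank, k = new_rank, 2 * k
    return sa

def lcp_array(seq: List[int], sa: List[int]) -> List[int]:
    # Kasai's algorithm: lcp[i] = common prefix length of sa[i] and sa[i-1].
    n = len(seq)
    rank = [0] * n
    for i, s in enumerate(sa):
        rank[s] = i
    lcp, h = [0] * n, 0
    for i in range(n):
        if rank[i] > 0:
            j = sa[rank[i] - 1]
            while i + h < n and j + h < n and seq[i + h] == seq[j + h]:
                h += 1
            lcp[rank[i]] = h
            h = max(h - 1, 0)
    return lcp

def repeated_runs(seq: List[int], min_len: int = 2) -> List[Tuple[int, ...]]:
    # Adjacent suffixes in the suffix array with lcp >= min_len witness a
    # repeated sub-sequence of adjacent chunks: a super-chunk candidate.
    sa = suffix_array(seq)
    lcp = lcp_array(seq, sa)
    runs = set()
    for i in range(1, len(seq)):
        if lcp[i] >= min_len:
            start = sa[i]
            runs.add(tuple(seq[start:start + lcp[i]]))
    return sorted(runs, key=len, reverse=True)

# Hypothetical chunk-ID sequence of a file; the run (7, 8, 9) repeats and
# would be merged into a single super chunk when re-encoding the file.
chunks = [1, 7, 8, 9, 2, 7, 8, 9, 3]
print(repeated_runs(chunks))  # [(7, 8, 9), (8, 9)]
```

A full re-encoder would then greedily cover the original sequence with the longest such runs, replacing each occurrence with one super-chunk reference, so that the dictionary stores fewer hashes and reconstruction issues fewer reads.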

