Abstract

Content-Defined Chunking (CDC) has been used in recent years to detect maximum redundancy in data deduplication systems. In this work, we focus on optimizing the deduplication system by tuning the key factors of CDC, namely the declaration of chunk cut-points and efficient fingerprint lookup using bucket-based index partitioning. For efficient chunking, we propose a Genetic Evolution (GE) based approach that optimizes the Two Thresholds Two Divisors (TTTD-P) CDC algorithm, significantly reducing the number of computing operations by using a single dynamic optimal divisor D with an optimal threshold value, thereby exploiting the multi-operation nature of TTTD. To reduce chunk-size variance, the original TTTD algorithm introduces an additional backup divisor D' that has a higher probability of finding cut-points; however, the extra divisor lowers chunking throughput, so TTTD aggravates the performance bottleneck of Rabin-based CDC. Asymmetric Extremum (AE) CDC significantly improves chunking throughput while providing comparable deduplication efficiency by using the local extreme value in a variable-sized asymmetric window, thereby overcoming the boundary-shift problem of Rabin, MAXP and TTTD. FastCDC (2016) is about 10 times faster than unimodal Rabin CDC and about 3 times faster than Gear and AE CDC, while achieving nearly the same deduplication ratio (DR). We therefore propose GE-based TTTD-P optimized chunking to maximize chunking throughput with an increased DR, combined with a bucket indexing approach that reduces the hash-comparison time needed to identify and declare redundant chunks by about 16 times relative to the unimodal baseline Rabin CDC, 5 times relative to AE CDC, and 1.6 times relative to FastCDC. Our comparative experimental analysis reveals that TTTD-P using the fast BUZ rolling hash function with bucket indexing on the Hadoop Distributed File System (HDFS) provides maximum redundancy detection with higher throughput, a higher deduplication ratio, lower computation time, and very low hash-comparison time, making it a strong choice for data deduplication in distributed big data storage systems.
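To make the two-thresholds, two-divisors rule referenced above concrete, the following is a minimal illustrative sketch of classic TTTD-style chunking. It is not the paper's GE-optimized TTTD-P or its BUZ rolling hash: the byte-wise hash update is a toy stand-in, and the threshold and divisor values (t_min, t_max, d_main, d_backup) are assumed example parameters, not the optimized ones reported in this work.

```python
def tttd_chunk_boundaries(data: bytes,
                          t_min: int = 2048,     # minimum chunk size (Tmin)
                          t_max: int = 8192,     # maximum chunk size (Tmax)
                          d_main: int = 4096,    # main divisor D
                          d_backup: int = 2048): # backup divisor D' (fires more often)
    """Return chunk cut-point offsets using a TTTD-style rule (illustrative sketch)."""
    boundaries = []
    start = 0
    n = len(data)
    while start < n:
        h = 0
        backup_cut = -1
        cut = -1
        end = min(start + t_max, n)
        for i in range(start, end):
            # Toy rolling-style hash update; a real implementation would use
            # a Rabin or BUZ rolling hash over a sliding window.
            h = ((h << 1) + data[i]) & 0xFFFFFFFF
            if i - start + 1 < t_min:
                continue                          # ignore cut-points below Tmin
            if h % d_backup == d_backup - 1:
                backup_cut = i + 1                # remember a backup cut-point
            if h % d_main == d_main - 1:
                cut = i + 1                       # main divisor fired: cut here
                break
        if cut == -1:
            # No main cut-point before Tmax: prefer the backup divisor's
            # cut-point if one was seen, otherwise force a cut at Tmax.
            cut = backup_cut if backup_cut != -1 else end
        boundaries.append(cut)
        start = cut
    return boundaries
```

The backup divisor is what reduces chunk-size variance: because d_backup is smaller than d_main, it finds a cut-point with higher probability, so forced cuts at Tmax are rarely content-blind. It is also the extra per-byte work that TTTD-P removes by collapsing the rule to a single dynamically chosen divisor.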
