Abstract

Data deduplication is a data reduction technique that works by detecting and eliminating redundant data, keeping only one copy, and is often used to reduce storage space and network bandwidth. While our main motivation has been low-bandwidth synchronization applications such as the Low Bandwidth Network File System (LBNFS), deduplication is also useful in archival file systems, and a number of researchers have advocated deduplication schemes for archival storage. Data deduplication is now one of the most active research topics in backup storage. In this paper, we survey different chunking algorithms for data deduplication and study the most widely used chunking algorithm, Two Threshold Two Divisor (TTTD). We evaluate TTTD with three hashing functions that can be used with it (Rabin fingerprint, Adler, and SHA-1), implementing each as a fingerprinting and hashing algorithm, and then compare their execution time and deduplication elimination ratio. To the best of our knowledge, this is the first time such a comparison has been performed; the results are presented below.
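For readers unfamiliar with TTTD, the following is a minimal sketch of the chunking loop, not the paper's implementation: all parameter values (window size, thresholds Tmin/Tmax, main divisor D, backup divisor D') are illustrative assumptions, and a simple polynomial rolling hash stands in for a true Rabin fingerprint.

# A minimal sketch of TTTD chunking (not the paper's implementation).
# All parameter values below are illustrative assumptions, and the
# polynomial rolling hash is a simple stand-in for a Rabin fingerprint.

WINDOW = 48                      # sliding-window size in bytes (assumed)
TMIN, TMAX = 2048, 16384         # lower/upper chunk-size thresholds (assumed)
D, DPRIME = 1024, 512            # main and backup divisors (assumed)
BASE, MOD = 257, (1 << 31) - 1   # rolling-hash parameters (assumed)


def tttd_chunks(data: bytes) -> list[int]:
    """Return the list of chunk end-offsets for `data` under TTTD."""
    pw = pow(BASE, WINDOW, MOD)      # factor for the byte leaving the window
    chunks, start, n = [], 0, len(data)
    while start < n:
        h, backup = 0, -1
        end = min(start + TMAX, n)
        cut = end                    # default: forced cut at Tmax (or EOF)
        for i in range(start, end):
            h = (h * BASE + data[i]) % MOD
            if i - start >= WINDOW:  # drop the byte sliding out of the window
                h = (h - data[i - WINDOW] * pw) % MOD
            if i - start + 1 < TMIN:
                continue             # chunk still below the minimum size
            if h % DPRIME == DPRIME - 1:
                backup = i + 1       # remember a weaker (backup) breakpoint
            if h % D == D - 1:
                cut = i + 1          # main divisor fires: cut here
                break
        else:
            # No main-divisor match before Tmax: fall back to the backup
            # breakpoint if one was seen, otherwise keep the forced cut.
            if backup != -1 and end - start == TMAX:
                cut = backup
        chunks.append(cut)
        start = cut
    return chunks


# Usage: chunk 100 KB of random data and check the size bounds.
import os

boundaries = tttd_chunks(os.urandom(100_000))
sizes = [b - a for a, b in zip([0] + boundaries, boundaries)]
print(len(sizes), min(sizes), max(sizes))  # every size <= TMAX

The backup divisor is typically chosen as roughly half the main divisor, so backup breakpoints occur more often than main ones; this keeps forced cuts at Tmax rare while still bounding the maximum chunk size.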
