Abstract

To maximize writing throughput, most deduplication systems and deduplication clusters store new chunks sequentially on disk. This method produces data fragments as the deduplication system grows, so it is important to analyse the data fragments in a deduplication system and to understand their features. We analyse the features of data fragments in a deduplication system using three real-world datasets. We use File Fragment Degree (FFD) to quantify the data fragments of a file in a deduplication system. We first implement Extreme Binning (EB) to collect the chunk addresses of every file in each dataset. Then, we design an FFD analyser to compute the FFD of every file from its chunk addresses and sizes. Finally, we analyse the FFD results. To the best of our knowledge, this is the first study to analyse data fragments in a deduplication system. Our findings show that: 1) there is a large amount of data fragments in the deduplication system for various datasets; 2) for enterprise backup data, the amount of data fragments increases rapidly as the deduplication system grows; 3) for a dataset mainly containing small files, the amount of data fragments increases slowly as the deduplication system grows.
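To illustrate the kind of computation the FFD analyser performs, here is a minimal Python sketch. It assumes FFD counts the number of contiguous runs ("fragments") that a file's chunks form on disk, given the chunk addresses and sizes collected by EB; the paper's exact definition of FFD may differ, and the function name and data layout are hypothetical.

```python
# Hypothetical FFD analyser sketch (not the authors' implementation).
# Assumption: FFD = number of contiguous on-disk runs formed by a
# file's chunks, computed from (address, size) pairs in logical order.

def file_fragment_degree(chunks):
    """chunks: list of (address, size) pairs in the file's logical order.

    Two logically adjacent chunks belong to the same fragment when the
    second starts exactly where the first ends on disk; every
    discontinuity starts a new fragment.
    """
    if not chunks:
        return 0
    fragments = 1
    prev_addr, prev_size = chunks[0]
    for addr, size in chunks[1:]:
        if addr != prev_addr + prev_size:  # discontinuity -> new fragment
            fragments += 1
        prev_addr, prev_size = addr, size
    return fragments

# Example: three chunks, the last stored elsewhere on disk -> FFD = 2
print(file_fragment_degree([(0, 4096), (4096, 4096), (1_000_000, 4096)]))
```

Under this assumption, a file whose chunks all sit in one contiguous region has FFD = 1, and deduplicated files that share chunks scattered across the store accumulate higher FFD values, matching the growth trends the abstract reports.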
