Abstract
To maximize write throughput, most deduplication systems and deduplication clusters store new chunks sequentially on disk. This approach causes data fragmentation as the deduplication system grows, so it is important to analyse the data fragments in a deduplication system and to understand their features. We analyse the features of data fragments in deduplication systems using three real-world datasets. We use File Fragment Degree (FFD) to quantify the fragmentation of a file in a deduplication system. We first implement Extreme Binning (EB) to collect the chunk addresses of every file in each dataset. Then, we design an FFD analyser that computes the FFD of every file from its chunk addresses and sizes. Finally, we analyse the resulting FFD values. To the best of our knowledge, this is the first study of data fragments in deduplication systems. Our findings show that: 1) there is a large amount of data fragments in deduplication systems across various datasets; 2) for enterprise backup data, the amount of data fragments increases rapidly as the deduplication system grows; 3) for datasets consisting mainly of small files, the amount of data fragments increases slowly as the deduplication system grows.
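The abstract does not define FFD formally. A minimal sketch of one plausible interpretation, in which a file's FFD counts the contiguous on-disk runs its chunks occupy, computed from each chunk's address and size (the function name and data layout here are illustrative assumptions, not the paper's implementation):

```python
# Hypothetical sketch: treats a file's FFD as the number of contiguous
# runs ("fragments") formed by its chunks on disk. The (address, size)
# representation is an assumption based on the abstract's description.
def file_fragment_degree(chunks):
    """chunks: list of (address, size) pairs in the file's logical order."""
    if not chunks:
        return 0
    fragments = 1
    for (prev_addr, prev_size), (addr, _) in zip(chunks, chunks[1:]):
        # A new fragment starts whenever the next chunk is not stored
        # immediately after the previous one on disk.
        if addr != prev_addr + prev_size:
            fragments += 1
    return fragments

# Example: three chunks, the last stored far away on disk -> FFD = 2
print(file_fragment_degree([(0, 4096), (4096, 4096), (1 << 20, 4096)]))
```

Under this reading, an FFD of 1 means a file is stored contiguously, and larger values indicate heavier fragmentation.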