Abstract
Deploying a primary storage deduplication system at the file level is a daunting challenge due to the disk-bottleneck and data-fragmentation problems. File semantics such as type and size can be exploited to reduce deduplication overhead. Data redundancy across files of different types is negligible compared with redundancy among files of the same type, so applying the same deduplication method irrespective of file type wastes computing resources. In this paper, the File-Aware DeDuplication (FADD) system is proposed. Files are first partitioned by size into small and large files. Large files are then categorized by their expected data redundancy into high, low, and unpredictable types, and a type-specific deduplication approach is applied to each category. Separate index tables are maintained for each large-file type, and a single hash table is maintained for small files of all types. The FADD system is simulated in a Linux environment using two different types of FIU traces and some locally collected data sets. The effectiveness of FADD is compared with a full deduplication system and a Hybrid Deduplication System on three parameters: metadata access overhead, average segment length, and response time. The experimental results show that FADD performs consistently better for all input data sets.
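To make the classification step concrete, the following is a minimal Python sketch of how a file-aware front end might partition files by size and route large files by the expected redundancy of their type. The size threshold, extension lists, category labels, and structure names are illustrative assumptions, not the paper's actual parameters or design.

```python
# Hypothetical sketch of the file-classification front end described in the
# abstract: files are split by size, large files are routed by the expected
# redundancy of their type, and each category gets its own index structure.
import os

SMALL_FILE_THRESHOLD = 64 * 1024  # assumed cutoff between small and large files

# Assumed mapping from file extension to expected intra-type redundancy.
HIGH_REDUNDANCY_TYPES = {".vmdk", ".log", ".txt", ".doc"}
LOW_REDUNDANCY_TYPES = {".jpg", ".mp3", ".mp4", ".zip"}

# One index table per large-file category, plus a single hash table
# shared by small files of all types.
index_tables = {"high": {}, "low": {}, "unpredictable": {}}
small_file_hash_table = {}

def classify(path: str) -> str:
    """Return the deduplication category for a file."""
    if os.path.getsize(path) < SMALL_FILE_THRESHOLD:
        return "small"
    ext = os.path.splitext(path)[1].lower()
    if ext in HIGH_REDUNDANCY_TYPES:
        return "high"          # fine-grained chunk-level dedup is likely to pay off
    if ext in LOW_REDUNDANCY_TYPES:
        return "low"           # coarse or whole-file dedup avoids wasted hashing
    return "unpredictable"     # fall back to an adaptive strategy

def route(path: str):
    """Pick the index structure a file's fingerprints are checked against."""
    category = classify(path)
    if category == "small":
        return small_file_hash_table
    return index_tables[category]
```

The point of the sketch is the routing decision itself: by never consulting a low-redundancy index for a high-redundancy file (and vice versa), lookup cost is confined to the index table of the file's own category.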