Abstract

To address this storage issue, we propose Content-Aware Deduplication Clustering for Cloud Storage Optimization based on a file-partitioning running-length encoder (CADC-FPRLE). First, the input is preprocessed by indexing, term counting, cleansing, and tokenization. Multi-objective clustering points are then analysed on a bisecting divisible partition block, which divides the document set; the count terms are filtered from the divisible blocks to form the count-term content block. Content-Aware Multi-Hash Ensemble Clustering (CAMH-EC) then groups similar blocks into clusters: it constructs a high-dimensional Euclidean interval to determine the number of clusters, and points are selected at random to form the initial collection. Next, the Magnitude Vector Space Rate (MVSR) estimates the similarity distance between the groups and selects the content with the highest scatter value for indexing. Finally, the Running Block Parity Encoder (RBPE) generates similarity parity to reduce redundant content to a single deduplicated file, optimising storage. The implementation achieves a higher level of storage optimization than previous methods.
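The core idea underlying such pipelines, storing each distinct content block only once and keeping references to it, can be illustrated with a minimal sketch. This is not the paper's CADC-FPRLE method; it is a generic hash-based block deduplicator (fixed identifiers such as `deduplicate` and `reconstruct` are hypothetical), shown only to make the redundancy-elimination step concrete.

```python
import hashlib

def deduplicate(blocks):
    """Store each unique block once.

    Returns (store, recipe): `store` maps a SHA-256 digest to the block
    bytes (one copy per unique block); `recipe` is the ordered list of
    digests from which the original stream can be rebuilt.
    """
    store = {}
    recipe = []
    for block in blocks:
        digest = hashlib.sha256(block).hexdigest()
        if digest not in store:      # redundant blocks are not stored again
            store[digest] = block
        recipe.append(digest)
    return store, recipe

def reconstruct(store, recipe):
    """Rebuild the original byte stream from the dedup store."""
    return b"".join(store[d] for d in recipe)

# Three blocks, two of them identical: only two copies are stored.
blocks = [b"AAAA", b"BBBB", b"AAAA"]
store, recipe = deduplicate(blocks)
assert len(store) == 2
assert reconstruct(store, recipe) == b"AAAABBBBAAAA"
```

Real systems refine this in exactly the directions the abstract describes: content-aware partitioning instead of fixed blocks, multiple hash functions to cluster similar (not just identical) blocks, and an encoding pass over the clustered content.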
