Abstract
With the continuous, exponential growth in the number of users and the size of their data, data deduplication is becoming a necessity for cloud storage providers. By storing only a single copy of duplicate data, cloud providers greatly reduce their storage and data transfer costs. These huge volumes of data require practical platforms for storage, processing, and availability, and cloud technology offers the potential to fulfill these requirements. Data deduplication is a strategy that eliminates duplicate data and keeps only a single unique copy of it in order to save storage space. This paper presents a scheme that permits a more fine-grained trade-off. The intuition is that outsourced data may require different levels of protection, depending on how widely the content is shared among users. A novel idea is presented that differentiates data according to their popularity. Based on this idea, an encryption scheme is designed that guarantees semantic security for unpopular data while also providing a high level of security for cloud data. In this way, data deduplication can be effective for popular data, while semantically secure encryption protects unpopular content. In addition, a backup and recovery system can be used when access is blocked, and frequent login attempts can be analyzed.
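The abstract does not specify the exact construction, but the popularity-based idea is commonly illustrated with convergent (content-derived) keys for popular data and random keys for unpopular data. The sketch below is a minimal, simplified illustration of that idea only; the popularity threshold, function names, and key-selection logic are assumptions for illustration, not the paper's actual scheme.

```python
import hashlib
import os

# Hypothetical threshold: number of distinct uploads before data counts as "popular".
POPULARITY_THRESHOLD = 3


def convergent_key(data: bytes) -> bytes:
    # Key derived deterministically from the content itself: identical data
    # yields identical keys (and hence identical ciphertexts under a
    # deterministic cipher), so the provider can deduplicate popular data.
    return hashlib.sha256(data).digest()


def random_key() -> bytes:
    # Fresh random key: semantically secure encryption, ciphertexts of equal
    # files are unlinkable, so no deduplication is possible.
    return os.urandom(32)


def choose_key(data: bytes, upload_count: int) -> bytes:
    # Popular data -> convergent key (deduplicable).
    # Unpopular data -> random key (stronger confidentiality).
    if upload_count >= POPULARITY_THRESHOLD:
        return convergent_key(data)
    return random_key()
```

Under this sketch, two users uploading the same popular file derive the same key and the provider stores one copy, whereas unpopular files remain protected by independent random keys.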