Abstract

The main aim of this survey is to explore the existing replication strategies in cloud databases so that researchers can include all the necessary metrics in their work in this domain and overcome the limitations of existing approaches. Cloud computing is a promising paradigm that provides computing resources as a service over a network. A number of data replication approaches have been presented for data clouds in recent decades. Each replication technique addresses some subset of attributes such as fault tolerance, scalability, reliability, performance, storage consumption, and data access time. In this review, the diverse issues involved in data replication methodologies are identified, and the different replication procedures are studied to discover which attributes each one addresses and which it ignores. To categorize the techniques, all articles with the phrase “dynamic data replication” in their title or keywords, published between January 2003 and December 2014, were first selected from IEEE, Elsevier, Springer and other international journals. We then categorize the research from three different perspectives: the features utilized, the applications targeted, and the parameters measured. In addition, this study gives a detailed overview of cloud-computing-based dynamic data replication.

Highlights

  • Cloud computing is becoming an increasingly significant paradigm that allows computing services to be used anywhere in the world

  • Key attributes of cloud computing include transparency in the resource-allocation process and in the provisioning of services

  • CDRM was implemented in the Hadoop Distributed File System (HDFS); experimental results show that it is cost-effective and outperforms HDFS's default replication management in terms of performance and load balancing for large-scale cloud storage


Summary

INTRODUCTION

Cloud computing is becoming an increasingly significant paradigm that allows computing services to be used anywhere in the world. By adjusting the replica number and location according to changing workloads and node capacity, CDRM dynamically redistributes workloads among the data nodes of a heterogeneous cloud. The authors implemented CDRM in the Hadoop Distributed File System (HDFS), and their experimental results show that CDRM is cost-effective and outperforms HDFS's default replication management in terms of performance and load balancing for large-scale cloud storage. Another line of work observed that heavy, dynamic request rates for a few popular virtual machine (VM) images cause degradation and fluctuation in the performance and availability of the system. To address this issue, the authors proposed a stochastic model based on queueing theory that captures the main factors in image provisioning, so as to optimize the number and placement of image replicas and manage VM images in a cost-effective manner. Finally, CAROM was evaluated at large scale using real-world file system traces; it outperforms replication-based schemes in storage cost by up to 60% and erasure-coded schemes in bandwidth cost by up to 43%, while maintaining access latencies close to those of replication-based schemes.
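To make the idea of workload-driven replica management concrete, the sketch below illustrates the general pattern described above: a file's replica count grows with its access rate, and replicas are placed on the least-loaded nodes that have room. The Node and ReplicaManager classes and the threshold values are illustrative assumptions for this discussion, not the published CDRM algorithm or any HDFS API.

    from dataclasses import dataclass

    @dataclass
    class Node:
        name: str
        capacity_gb: float   # total storage capacity (illustrative units)
        used_gb: float       # storage currently consumed
        load: float          # normalized request load in [0, 1]

        def utilization(self) -> float:
            return self.used_gb / self.capacity_gb

    class ReplicaManager:
        def __init__(self, nodes, min_replicas=2, max_replicas=6):
            self.nodes = nodes
            self.min_replicas = min_replicas
            self.max_replicas = max_replicas

        def target_replicas(self, accesses_per_hour: float) -> int:
            # Popularity-driven replica count: one extra replica per 100
            # requests/hour, clamped to [min_replicas, max_replicas].
            extra = int(accesses_per_hour // 100)
            return max(self.min_replicas,
                       min(self.max_replicas, self.min_replicas + extra))

        def place(self, file_size_gb: float, count: int):
            # Choose the `count` least-loaded nodes that still have room,
            # so popular data lands where it best balances the load.
            fits = [n for n in self.nodes
                    if n.used_gb + file_size_gb <= n.capacity_gb]
            fits.sort(key=lambda n: (n.load, n.utilization()))
            chosen = fits[:count]
            for n in chosen:
                n.used_gb += file_size_gb
            return [n.name for n in chosen]

    # Example: a popular 1 GB file gets four replicas on the least-loaded nodes.
    nodes = [Node("dn1", 100, 40, 0.7), Node("dn2", 100, 10, 0.2),
             Node("dn3", 100, 55, 0.4), Node("dn4", 100, 5, 0.1)]
    manager = ReplicaManager(nodes)
    k = manager.target_replicas(accesses_per_hour=250)   # -> 4
    print(k, manager.place(file_size_gb=1.0, count=k))

A hybrid scheme such as CAROM would add a further decision on top of this loop, keeping hot data fully replicated for low latency while demoting cold data to an erasure-coded layout to save storage.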

[Comparison table (not recoverable from extraction): surveyed techniques such as autonomic data replication in the cloud environment and MinCopysets on HDFS, with their limitations (e.g., limited storage) and the parameters measured (e.g., storage capacity reduction, execution time).]
Findings
CONCLUSION