Abstract

Millions of telecommunication devices, IoT sensors, and web services, especially social media sites, now produce big data every second, and the applications that generate these data also need to access them quickly. Among other approaches, cloud computing offers content delivery networks, which use data replication to reduce latency for such real-time applications. Fast processing, storage, and timely analysis of these data are a challenge for most future Internet applications, and cloud computing has emerged as the storage and processing paradigm for handling big data at the speed at which it is produced. Moreover, cloud computing has evolved under the banner of "Everything as a Service," and provisioning data is a necessary and significant task for powering all of these services. To give end users easy and fast access to data, the cloud maintains backups and replicates copies across multiple data centers, and the geographical locations of the data centers where data are placed have a profound impact on data access time. To address the challenge of effective data replication and minimize data access time, we propose a genetic algorithm (GA)-based technique that selects nearby data centers on which to store data. The proposed algorithm improves access time, and thus the efficiency of cloud servers, providing better quality of service (QoS) to end users.
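The abstract does not specify the GA's chromosome encoding, fitness function, or operators, so the following is only an illustrative sketch of the general idea: evolving a replica placement (a set of data-center sites) that minimizes mean access latency for user regions. All names, parameters, and the latency values are assumptions, not the paper's actual method.

```python
import random

# Illustrative sketch only: the abstract gives no GA details, so the
# encoding, fitness, and operators below are assumptions.
# An individual is a tuple of distinct data-center indices that hold a
# replica; fitness is the mean access latency when each user region
# reads from its nearest replica (lower is better).

def fitness(placement, latency):
    """Mean access time over all user regions (same units as `latency`)."""
    return sum(min(row[dc] for dc in placement) for row in latency) / len(latency)

def evolve(latency, replicas=2, pop_size=20, generations=50, seed=0):
    """Search for a low-latency replica placement with a simple GA."""
    rng = random.Random(seed)
    n = len(latency[0])  # number of candidate data centers

    def random_individual():
        return tuple(sorted(rng.sample(range(n), replicas)))

    population = [random_individual() for _ in range(pop_size)]
    for _ in range(generations):
        # Selection: keep the fitter half (elitism preserves the best found).
        population.sort(key=lambda p: fitness(p, latency))
        survivors = population[: pop_size // 2]
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = rng.sample(survivors, 2)
            pool = sorted(set(a) | set(b))  # crossover: mix both parents' sites
            child = rng.sample(pool, replicas) if len(pool) >= replicas else list(a)
            if rng.random() < 0.2:  # mutation: swap one site for an unused one
                unused = [dc for dc in range(n) if dc not in child]
                if unused:
                    child[rng.randrange(replicas)] = rng.choice(unused)
            children.append(tuple(sorted(child)))
        population = survivors + children
    return min(population, key=lambda p: fitness(p, latency))

if __name__ == "__main__":
    # Hypothetical latency matrix: latency[user_region][data_center] in ms.
    latency = [
        [10, 80, 60],
        [70, 15, 50],
        [90, 55, 20],
    ]
    best = evolve(latency, replicas=2)
    print("replica sites:", best, "mean latency:", fitness(best, latency))
```

In this toy setup, the GA favors placements whose replicas sit close to the regions that access them, which is the intuition behind storing data on nearby data centers to cut access time.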
