Abstract

Cloud computing is an emerging technology increasingly valued for diverse uses, including data processing, the Internet of Things (IoT), and data storage. The continuous growth in the number of cloud users, the widespread adoption of IoT devices, and the integration of those devices with cloud platforms have sharply increased the volume of data being generated, making the management of cloud-hosted data considerably more difficult. Migrating all of this data to cloud-hosted data centers raises several significant challenges, including high bandwidth consumption, longer wait times, higher costs, and greater energy consumption. Cloud computing must therefore allocate resources according to the specific actions of its users, providing clients with a high Quality of Service (QoS), optimal response times, and adherence to the established Service Level Agreement (SLA). Under these conditions, it is essential to use the available computational resources effectively, which calls for an optimal task-scheduling strategy. The goal of the proposed study is to allocate and schedule cloud-based virtual machines (VMs) and tasks so as to reduce completion times and the associated costs. To this end, the study presents a new scheduling method that uses Q-Learning to optimize resource utilization. The algorithm's primary goals are to optimize its objective function, construct the ideal network, and exploit experience replay techniques.
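The abstract gives no implementation details, but the core idea can be illustrated with a minimal sketch: a tabular Q-Learning agent that assigns each incoming task to a VM and updates its Q-values from an experience replay buffer. Everything below (VM speeds and costs, the reward shape, the state definition, the hyperparameters) is an illustrative assumption, not a value or design taken from the paper.

```python
import random
from collections import deque

# Illustrative environment (assumed, not from the paper):
# 3 VMs with different speeds (MIPS) and per-second costs,
# plus a queue of tasks sized in millions of instructions (MI).
VM_SPEEDS = [500, 1000, 2000]
VM_COSTS = [0.01, 0.02, 0.05]
TASKS = [random.randint(1_000, 10_000) for _ in range(200)]

ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.2  # assumed hyperparameters
BATCH = 16                             # replay minibatch size

# State: index of the currently busiest VM (a deliberately tiny
# state space so the tabular sketch stays readable).
# Action: index of the VM the task is assigned to.
q_table = [[0.0] * len(VM_SPEEDS) for _ in range(len(VM_SPEEDS))]
replay = deque(maxlen=500)             # experience replay buffer
vm_load = [0.0] * len(VM_SPEEDS)       # queued seconds of work per VM

def state():
    return max(range(len(vm_load)), key=lambda i: vm_load[i])

def choose(s):
    if random.random() < EPSILON:      # explore
        return random.randrange(len(VM_SPEEDS))
    return max(range(len(VM_SPEEDS)), key=lambda a: q_table[s][a])

for task in TASKS:
    s = state()
    a = choose(s)
    runtime = task / VM_SPEEDS[a]
    cost = runtime * VM_COSTS[a]
    vm_load[a] += runtime
    # Reward penalizes both the task's completion time on the
    # chosen VM and its monetary cost.
    reward = -(vm_load[a] + cost)
    replay.append((s, a, reward, state()))

    # Experience replay: update Q-values from a random minibatch
    # of past transitions instead of only the latest one.
    for ps, pa, pr, ps2 in random.sample(list(replay), min(BATCH, len(replay))):
        best_next = max(q_table[ps2])
        q_table[ps][pa] += ALPHA * (pr + GAMMA * best_next - q_table[ps][pa])

print(f"makespan: {max(vm_load):.1f}s")
```

The reward combines completion time and cost so that minimizing one is never pursued at the total expense of the other; the paper's actual objective function, state encoding, and network structure are not specified in the abstract.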
