Abstract

Owing to advantages such as high performance, low maintenance, and reliability, more and more companies are moving their computing infrastructure to the cloud. At the same time, as a growing number of users continuously submit jobs to the cloud, the energy consumed by current cloud data centers has become a major concern for cloud service providers, for both financial and environmental reasons. In this paper, we propose a deep reinforcement learning (DRL) approach to handle real-time jobs. Specifically, we focus on allocating incoming jobs to appropriate virtual machines (VMs) so that energy consumption is optimized while a high quality of service (QoS) is maintained. We present the detailed design and implementation of our approach, and our experimental results demonstrate that, under different real-time cloud workloads, the proposed method achieves a higher job success rate and lower average response time with less energy consumption than existing approaches.
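The abstract does not specify the learning algorithm, so the following is only a minimal sketch of the general idea it describes: an agent that assigns each arriving job to a VM and is rewarded for jointly minimizing energy and response time. For illustration it uses tabular Q-learning over a toy environment (a real DRL scheduler would replace the table with a neural network); all class and function names, the VM specifications, and the reward weights are hypothetical.

```python
import random

class ToyCloudEnv:
    """Toy cloud model: each VM has a power draw (watts) and a speed factor.
    A job's run time is length / speed; its energy cost is power * run time."""
    def __init__(self, vm_specs):
        self.vm_specs = vm_specs          # list of (power_watts, speed)
        self.queues = [0.0] * len(vm_specs)  # pending work (seconds) per VM

    def step(self, vm, job_len):
        power, speed = self.vm_specs[vm]
        run_time = job_len / speed
        response = self.queues[vm] + run_time  # queueing delay + run time
        self.queues[vm] += run_time
        energy = power * run_time
        # Reward trades off energy against QoS (response time); the 0.5/0.5
        # weights are an arbitrary illustrative choice.
        return -(0.5 * energy + 0.5 * response)

def bucket(queues, size=2.0, cap=4):
    """Discretize per-VM queue lengths into a small state tuple."""
    return tuple(min(int(q // size), cap) for q in queues)

def train(vm_specs, episodes=200, jobs=30, eps=0.1, alpha=0.5, gamma=0.9, seed=0):
    """Epsilon-greedy tabular Q-learning over the toy environment."""
    rng = random.Random(seed)
    q = {}
    n = len(vm_specs)
    for _ in range(episodes):
        env = ToyCloudEnv(vm_specs)
        state = bucket(env.queues)
        for _ in range(jobs):
            job_len = rng.uniform(1.0, 3.0)
            if state not in q:
                q[state] = [0.0] * n
            if rng.random() < eps:
                a = rng.randrange(n)            # explore
            else:
                a = max(range(n), key=lambda i: q[state][i])  # exploit
            r = env.step(a, job_len)
            nxt = bucket(env.queues)
            if nxt not in q:
                q[nxt] = [0.0] * n
            # Standard Q-learning update toward the bootstrapped target.
            q[state][a] += alpha * (r + gamma * max(q[nxt]) - q[state][a])
            state = nxt
    return q

# Usage: two equal-speed VMs, one drawing twice the power of the other.
q_table = train([(100.0, 1.0), (50.0, 1.0)])
idle_state = (0, 0)
preferred_vm = max(range(2), key=lambda i: q_table[idle_state][i])
```

The learned policy balances the energy saved by favoring the low-power VM against the response-time penalty of letting its queue grow, which is the same trade-off the abstract's reward design targets.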
