Abstract

The ever-expanding scale of cloud datacenters necessitates automated resource provisioning to meet the requirements of low latency and high energy efficiency. However, due to dynamic system states and varied user demands, efficient resource allocation in the cloud faces significant challenges. Most existing solutions for cloud resource allocation cannot effectively handle dynamic cloud environments because they depend on prior knowledge of the cloud system, which may lead to excessive energy consumption and degraded Quality-of-Service (QoS). To address this problem, we propose an adaptive and efficient cloud resource allocation scheme based on Actor-Critic Deep Reinforcement Learning (DRL). First, the actor parameterizes the resource allocation policy and chooses actions (scheduling jobs) based on scores assessed by the critic, which evaluates the chosen actions. Next, the policy is updated by gradient ascent, and the variance of the policy gradient is reduced with an advantage function, which improves the training efficiency of the proposed method. We conduct extensive simulation experiments using real-world data from Google cloud datacenters. The results show that our method achieves superior QoS in terms of latency and job dismissing rate with enhanced energy efficiency, compared to two advanced DRL-based methods and five classic cloud resource allocation methods.
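For context, the update described above follows the standard advantage actor-critic form; the sketch below uses generic notation (policy \(\pi_\theta\), critic value estimate \(V_w\), state \(s\), action \(a\)) rather than the paper's own symbols, and the one-step temporal-difference advantage estimate is one common choice, not necessarily the exact estimator used in this work.

```latex
% Standard advantage actor-critic policy-gradient update (generic form).
% The actor ascends this gradient; subtracting the critic's value
% estimate V_w(s) from the return yields a lower-variance advantage.
\nabla_\theta J(\theta)
  = \mathbb{E}_{\pi_\theta}\!\left[\nabla_\theta \log \pi_\theta(a \mid s)\, A(s, a)\right],
\qquad
A(s, a) \approx r + \gamma V_w(s') - V_w(s).
```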
