Abstract

Due to ever-changing system states and diverse user demands, resource allocation in cloud data centers faces great challenges in dynamics and complexity. Although existing solutions address this problem, they cannot respond effectively to dynamic changes in system states and user demands because they depend on prior knowledge of the system. Realizing automatic, adaptive resource allocation that satisfies diverse system requirements in cloud data centers therefore remains an open challenge. To cope with this challenge, we propose an advantage actor-critic reinforcement learning (RL) framework for resource allocation in cloud data centers. First, the actor parameterizes the policy (allocating resources) and chooses continuous actions (scheduling jobs) based on the scores (evaluating actions) given by the critic. Next, the policy is updated by gradient ascent, and the variance of the policy gradient is significantly reduced by the advantage function. Simulations on Google cluster-usage traces show the effectiveness of the proposed method for cloud resource allocation. Moreover, the proposed method outperforms classic resource allocation algorithms in terms of job latency and converges faster than the traditional policy gradient method.
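To make the actor-critic mechanism described above concrete, the following is a minimal sketch of an advantage-based policy-gradient update on a toy scheduling problem. It is not the paper's method or simulator: the three-server latency model, the state-less critic baseline, and the discrete softmax policy (the paper uses continuous actions) are all simplifying assumptions made purely for illustration. It does show the two ingredients named in the abstract: the critic's score turns the raw reward into an advantage, and the actor ascends the policy gradient weighted by that advantage, which lowers gradient variance compared with using the raw return.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative assumption (not from the paper): scheduling a job to server a
# yields reward = -expected_latency[a] plus noise, so lower latency is better.
expected_latency = np.array([5.0, 2.0, 8.0])

theta = np.zeros(3)      # actor parameters: softmax policy over servers
value_baseline = 0.0     # critic: a single state-less value estimate
alpha_actor, alpha_critic = 0.1, 0.1

def softmax(x):
    z = np.exp(x - x.max())
    return z / z.sum()

for step in range(2000):
    probs = softmax(theta)
    a = rng.choice(3, p=probs)                      # actor picks a server
    reward = -expected_latency[a] + rng.normal(0.0, 0.5)

    # Advantage: observed reward minus the critic's estimate. Subtracting
    # the baseline is what reduces the variance of the policy gradient.
    advantage = reward - value_baseline

    # Critic update: move the value estimate toward observed rewards.
    value_baseline += alpha_critic * advantage

    # Actor update: gradient ascent on log pi(a | theta) * advantage.
    grad_log_pi = -probs
    grad_log_pi[a] += 1.0
    theta += alpha_actor * advantage * grad_log_pi

best = int(np.argmax(softmax(theta)))  # index of the preferred server
```

After training, the policy concentrates on the lowest-latency server (index 1 here), mirroring how the framework steers jobs toward allocations that minimize job latency.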
