Abstract

In cloud computing, task scheduling with multi-objective optimization is a highly challenging problem because the environment is dynamic and workloads are bursty. Previous studies have mostly focused on assigning incoming tasks in a specific scenario and generalize poorly across objectives, so they become inefficient under large-scale, heterogeneous cloud workloads. To address this issue, we propose a deep reinforcement learning (DRL)-based intelligent cloud task scheduler that makes scheduling decisions by learning directly from experience, without any prior knowledge. We formulate task scheduling as a dynamic constrained optimization problem and adopt the deep deterministic policy gradient (DDPG) network to find the optimal task assignment while meeting performance and cost constraints. We propose a correlation-aware state representation method to capture the inherent characteristics of demands, and we design a dual reward model to learn the optimal task allocation strategy. Extensive experimental results on Alibaba cloud workloads show that, compared with existing solutions, our DDPG-based task scheduler is superior and effective in optimizing both performance and cost.
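To make the two ingredients named above more concrete, the following is a minimal Python sketch of (1) a correlation-aware state vector for an incoming task and (2) a dual reward that trades performance against cost. The feature layout, function names, weights, and penalty scheme are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def build_state(task_demand, recent_demands, host_utilization):
    """Hypothetical correlation-aware state: the incoming task's resource
    demand, the correlation structure of recently observed demands, and
    current per-host utilization, concatenated into one vector."""
    demand = np.asarray(task_demand, dtype=float)        # (d,)  e.g. [cpu, mem]
    recent = np.asarray(recent_demands, dtype=float)     # (T, d) recent task demands
    util = np.asarray(host_utilization, dtype=float)     # (H, d) per-host utilization
    # Correlation matrix between resource dimensions of the recent workload
    # (e.g. CPU/memory co-demand), flattened so the agent can observe it.
    corr = np.corrcoef(recent, rowvar=False).flatten()
    return np.concatenate([demand, corr, util.flatten()])

def dual_reward(latency, cost, latency_slo, budget, w_perf=0.5, w_cost=0.5):
    """Hypothetical dual reward: positive when latency stays under its SLO
    and cost stays under budget, negative otherwise; the two terms are
    combined with fixed weights."""
    perf = 1.0 - min(latency / latency_slo, 2.0)
    econ = 1.0 - min(cost / budget, 2.0)
    return w_perf * perf + w_cost * econ

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    recent = rng.random((50, 2))                    # last 50 tasks' [cpu, mem] demands
    state = build_state([0.3, 0.6], recent, rng.random((4, 2)))
    print(state.shape)                              # (14,) = 2 demand + 4 corr + 8 util
    print(dual_reward(latency=80, cost=1.2, latency_slo=100, budget=2.0))
```

In a DDPG setup, a state vector of this form would be fed to the actor network and the scalar reward would drive the critic's value estimate; the weights `w_perf` and `w_cost` are the knobs that encode the performance/cost trade-off.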
