Abstract

Studies of resource provisioning in cloud computing have drawn extensive attention, since effective task scheduling promises an energy-efficient way of utilizing resources while meeting the diverse requirements of users. Deep reinforcement learning (DRL) has demonstrated an outstanding capability to tackle this issue through online self-learning; however, it is still hampered by low sampling efficiency, poor sample validity, and slow convergence, especially for deadline-constrained applications. To address these challenges, this paper proposes an Imitation Learning Enabled Fast and Adaptive Task Scheduling (ILETS) framework based on DRL. First, we introduce behavior cloning to provide a well-behaved and robust model through Offline Initial Network Parameters Training (OINPT), so as to guarantee the initial decision-making quality of DRL. Next, we design a novel Online Asynchronous Imitation Learning (OAIL)-based method that assists the DRL agent in re-optimizing its policy and resisting the oscillations caused by the high dynamics of the cloud, ensuring that the agent moves toward the optimal policy in a fast and stable manner. Extensive experiments on a real-world dataset demonstrate that the proposed ILETS consistently achieves shorter response times, lower energy consumption, and higher success rates than the baselines and other state-of-the-art methods, with accelerated convergence.
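The abstract does not spell out how OINPT is implemented; as a rough illustration of the underlying idea, the sketch below shows generic behavior cloning, i.e., supervised pretraining of a scheduling policy on expert (state, action) pairs before RL begins. All names, dimensions, and the network shape are hypothetical, and the expert data here is a random stand-in for demonstrations such as traces from a heuristic scheduler.

```python
import torch
import torch.nn as nn

# Hypothetical scheduling policy: maps a task/cluster state vector to logits
# over candidate machines. Dimensions are illustrative, not from the paper.
STATE_DIM, NUM_MACHINES = 16, 8

policy = nn.Sequential(
    nn.Linear(STATE_DIM, 64),
    nn.ReLU(),
    nn.Linear(64, NUM_MACHINES),  # one logit per scheduling action
)

def behavior_cloning_pretrain(policy, expert_states, expert_actions,
                              epochs=50, lr=1e-3):
    """Offline supervised pretraining (in the spirit of OINPT):
    fit the policy to expert decisions before any RL interaction."""
    opt = torch.optim.Adam(policy.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        opt.zero_grad()
        logits = policy(expert_states)
        loss = loss_fn(logits, expert_actions)  # imitate the expert's choices
        loss.backward()
        opt.step()
    return policy

# Toy stand-in for expert demonstrations.
states = torch.randn(256, STATE_DIM)
actions = torch.randint(0, NUM_MACHINES, (256,))
behavior_cloning_pretrain(policy, states, actions)
```

The pretrained weights would then initialize the DRL agent, so that its earliest online decisions already mimic the expert rather than being random; the paper's OAIL stage, which continues to inject expert guidance asynchronously during online learning, is not reconstructed here.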
