Abstract

The expanding scale of cloud data centers and the diversification of user services have increased energy consumption and greenhouse gas emissions, with long-term detrimental effects on the environment. To address this issue, scheduling techniques that reduce energy usage have become an active research topic in cloud computing and cluster management. Deep Reinforcement Learning (DRL), which combines the strengths of Deep Learning and Reinforcement Learning, has shown promise in solving scheduling problems in cloud computing. However, literature reviews of task scheduling approaches that employ DRL techniques to reduce energy consumption are limited. In this paper, we survey and analyze the energy consumption models used as scheduling objectives, provide an overview of the DRL algorithms used in the literature, and quantitatively compare how the surveyed models differ in their formulations of the Markov Decision Process elements, namely states, actions, and rewards. We also summarize the experimental platforms, datasets, and neural network architectures used in the surveyed DRL algorithms. Finally, we analyze the research gaps in DRL-based task scheduling and discuss existing challenges and future directions from several perspectives. This paper relates the task scheduling problem to the DRL approach and provides a reference for in-depth research on DRL-based task scheduling. Our findings suggest that DRL-based scheduling techniques can significantly reduce energy consumption in cloud data centers, making them a promising area for further investigation.

