Abstract

Task scheduling based on temperature perception helps avoid hotspots and optimize the internal temperature distribution of data centers. However, the effectiveness of such scheduling depends largely on the accuracy of temperature prediction. Many features affect temperature prediction in data centers, and their variation periods differ greatly, making them difficult for traditional machine learning models to fit accurately. Therefore, this article proposes a step-by-step temperature prediction algorithm based on the Gated Recurrent Unit (GRU). The algorithm builds separate prediction models for important parameters that affect temperature, such as CPU utilization and air-conditioning temperature, and feeds the outputs of these two models into the server temperature prediction model to better fit the changes in feature values. Following the principle of thermal locality, the model also incorporates the temperatures of the upper and lower neighboring servers for joint modeling. Experiments show that the prediction model can accurately track the inlet temperature evolution of a server under dynamic workload: the RMSE reaches 0.278 and the average prediction temperature difference is 0.633, substantially outperforming traditional models. In addition, this article proposes a minimum temperature difference scheduling algorithm based on the temperature prediction model. It effectively reduces the number of servers running at high or low temperatures, makes the temperature distribution of the data center more balanced, and achieves better energy savings than other baseline algorithms.
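The step-by-step (cascaded) structure described in the abstract can be sketched roughly as follows. This is an illustrative, untrained NumPy mock-up, not the paper's implementation: the GRU is hand-rolled for self-containment, and all model sizes, the history window length `T`, the scalar read-outs, and the normalization constants are hypothetical assumptions. Stage 1 models predict CPU utilization and air-conditioning temperature; stage 2 consumes those predictions together with the inlet temperatures of the server and its upper/lower neighbors (thermal locality).

```python
import numpy as np

rng = np.random.default_rng(0)

class GRUCell:
    """Minimal GRU cell in NumPy, for illustration only (untrained weights)."""
    def __init__(self, in_dim, hid_dim):
        s = 1.0 / np.sqrt(hid_dim)
        # Update gate z, reset gate r, candidate state: each has an input
        # weight W and a recurrent weight U.
        self.Wz = rng.uniform(-s, s, (hid_dim, in_dim)); self.Uz = rng.uniform(-s, s, (hid_dim, hid_dim))
        self.Wr = rng.uniform(-s, s, (hid_dim, in_dim)); self.Ur = rng.uniform(-s, s, (hid_dim, hid_dim))
        self.Wh = rng.uniform(-s, s, (hid_dim, in_dim)); self.Uh = rng.uniform(-s, s, (hid_dim, hid_dim))
        self.hid_dim = hid_dim

    def step(self, x, h):
        sig = lambda a: 1.0 / (1.0 + np.exp(-a))
        z = sig(self.Wz @ x + self.Uz @ h)            # update gate
        r = sig(self.Wr @ x + self.Ur @ h)            # reset gate
        h_cand = np.tanh(self.Wh @ x + self.Uh @ (r * h))
        return (1 - z) * h + z * h_cand

def run_gru(cell, seq):
    """Run the cell over a (T, in_dim) sequence; return the final hidden state."""
    h = np.zeros(cell.hid_dim)
    for x in seq:
        h = cell.step(x, h)
    return h

# Stage 1: auxiliary models for CPU utilization and AC supply temperature.
cpu_gru, ac_gru = GRUCell(1, 8), GRUCell(1, 8)
# Stage 2: server inlet-temperature model. Its 5-dim input concatenates the
# two stage-1 predictions with the inlet temperatures of this server and its
# upper/lower neighbors (thermal locality).
temp_gru = GRUCell(5, 16)
out_w = rng.uniform(-0.1, 0.1, 16)   # linear read-out head (untrained)

T = 12                                            # history window (hypothetical)
cpu_hist = rng.uniform(0, 1, (T, 1))              # CPU utilization history
ac_hist  = rng.uniform(18, 22, (T, 1)) / 25.0     # AC temperature, crudely normalized
inlet    = rng.uniform(20, 30, (T, 3)) / 35.0     # [self, upper, lower] inlet temps

# Stage-1 forecasts (first hidden unit used as a scalar proxy, since the
# read-out heads are omitted here for brevity).
cpu_pred = run_gru(cpu_gru, cpu_hist)[0]
ac_pred  = run_gru(ac_gru, ac_hist)[0]

# Stage 2: feed the stage-1 outputs alongside the neighbor-aware inlet history.
stage2_in = np.hstack([np.full((T, 1), cpu_pred), np.full((T, 1), ac_pred), inlet])
h = run_gru(temp_gru, stage2_in)
inlet_pred = float(out_w @ h)                     # predicted (normalized) inlet temp
print(round(inlet_pred, 4))
```

The design point the abstract makes is visible in `stage2_in`: rather than asking one model to fit features with very different variation periods, each fast-changing driver gets its own GRU, and only their forecasts enter the final temperature model.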
