Abstract

Thermal-Aware (TA) scheduling is a fundamental thermal-management technique for avoiding hotspots and attaining a better thermal profile inside data centers. However, TA scheduling depends on accurate server temperature calculation to make reliable scheduling decisions. Existing TA simulation frameworks rely on CRAC, RC, and thermodynamic models to calculate server temperatures, but these models are computationally inefficient. Hence, there is a need for an efficient and lightweight temperature prediction model to fill this gap. Moreover, existing TA allocation and migration schemes neglect workload heterogeneity in the execution time (length) of user tasks in batch workloads. Ignoring task heterogeneity may lead to higher ambient temperatures on neighboring servers: assigning a hotter, longer-duration job to a server with a higher ambient effect results in significantly higher cooling expenses. Our contribution in this paper is twofold: (1) we design and train a deep neural network (DNN) to predict server temperatures, achieving 96.11% prediction accuracy, and (2) we propose a TA algorithm for job allocation and migration that considers the length and thermal profile of user tasks and servers. We compare the proposed strategy against existing TA scheduling designs, namely the TA Scheduling Algorithm (TASA) and the TA Control Strategy (TACS), using simulations. Results demonstrate that the proposed TA approach reduces overall energy consumption by up to 12.03% and 8.28% compared to TASA and TACS, respectively.
