Abstract

Minimizing energy consumption is a dominant problem in data center design and operation. The common approach to this problem is to optimize the data center layout and the workload distribution among servers. Previous works have mainly adopted the temperature at the server inlet as the optimization constraint; however, the inlet temperature does not properly characterize a server's thermal state. In this paper, a chip temperature-based workload allocation strategy (CTWA-MTP) is proposed to reduce the holistic power consumption of data centers. Our method adopts an abstract heat-flow model to describe the thermal environment in the data center and a thermal resistance model to describe the convective heat transfer of each server, and it optimizes the workload allocation subject to a chip temperature threshold. In addition, the temperature-dependent leakage power of the server is considered in our model. The problem is formulated as a constrained nonlinear optimization and solved with a genetic algorithm (GA). We applied the method to a sample data center modeled with computational fluid dynamics (CFD) software. Compared in simulation with other workload allocation strategies, the proposed method prevents the servers from being overcooled and achieves substantial energy savings by optimizing the workload allocation in an air-cooled data center.
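
To make the optimization concrete, the following is a minimal, self-contained sketch of a GA-based, chip-temperature-constrained allocation in the spirit of CTWA-MTP. Only the chip temperature threshold (80 °C) and the thermal resistance (0.0147 K/W) are taken from the paper; the recirculation matrix D, the idle/busy power levels, the leakage slope K_LEAK, the supply temperature, the total load, and the CRAC coefficient-of-performance curve are illustrative stand-ins, not the paper's calibrated models.

```python
# Sketch of GA-based workload allocation with a chip-temperature constraint.
# Only T_CHIP_MAX and R come from the paper; all other parameters are
# hypothetical stand-ins for the paper's calibrated models.
import random

random.seed(1)

N = 10                 # number of servers
T_SUP = 18.0           # CRAC supply-air temperature (degC), assumed
T_CHIP_MAX = 80.0      # chip temperature threshold (from the paper)
R = 0.0147             # convective thermal resistance (K/W, from the paper)
P_IDLE, P_BUSY = 1500.0, 3600.0  # hypothetical idle/busy power (W); 3.6 kW
                                 # is roughly what 53 K / 0.0147 K/W implies
K_LEAK = 2.0           # hypothetical leakage slope (W per K of chip temp)
TOTAL_U = 6.0          # total utilization to distribute (sum of u_i)

# Hypothetical cross-interference matrix: d_ij is the inlet temperature rise
# at server i per watt dissipated by server j (abstract heat-flow model).
D = [[0.0002 + 0.0005 * (i == j) for j in range(N)] for i in range(N)]

def steady_state(u):
    """Chip temperatures and server powers for a utilization vector u.
    Leakage power and chip temperature are mutually dependent, so we
    iterate the coupled equations to a fixed point."""
    p = [P_IDLE + (P_BUSY - P_IDLE) * ui for ui in u]
    t_chip = [0.0] * N
    for _ in range(25):
        t_in = [T_SUP + sum(D[i][j] * p[j] for j in range(N)) for i in range(N)]
        t_chip = [t_in[i] + R * p[i] for i in range(N)]
        p = [P_IDLE + (P_BUSY - P_IDLE) * u[i]
             + K_LEAK * max(0.0, t_chip[i] - 25.0) for i in range(N)]
    return t_chip, p

def total_power(u):
    """IT power plus cooling power via an assumed CRAC CoP curve, with a
    large penalty for violating the chip temperature threshold."""
    t_chip, p = steady_state(u)
    cop = 0.0068 * T_SUP ** 2 + 0.0008 * T_SUP + 0.458  # assumed CoP model
    penalty = 1e4 * sum(max(0.0, t - T_CHIP_MAX) for t in t_chip)
    return sum(p) * (1.0 + 1.0 / cop) + penalty

def normalize(u):
    # Rescale so utilizations sum to TOTAL_U (clipping at 1.0 makes this
    # only approximate, which is adequate for a sketch).
    s = sum(u) or 1.0
    return [min(1.0, max(0.0, ui * TOTAL_U / s)) for ui in u]

def ga(pop_size=30, gens=100):
    pop = [normalize([random.random() for _ in range(N)]) for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=total_power)
        elite = pop[: pop_size // 4]          # keep the fittest quarter
        while len(elite) < pop_size:
            a, b = random.sample(pop[: pop_size // 2], 2)
            w = random.random()
            child = [w * x + (1 - w) * y for x, y in zip(a, b)]  # blend crossover
            child[random.randrange(N)] += random.gauss(0.0, 0.1)  # mutation
            elite.append(normalize(child))
        pop = elite
    return min(pop, key=total_power)

best = ga()
print("utilizations:", [round(ui, 2) for ui in best])
print("total power (W):", round(total_power(best), 1))
```

The fixed-point loop in steady_state reflects the coupling the abstract describes: leakage power raises the chip temperature, which in turn raises the leakage power, so the two must be solved together rather than in a single pass.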

Highlights

  • Numerous trends in the information technology (IT) industry show increasing energy consumption in data center operation over the past decade [1]

  • We assumed that the inlet temperature reached the upper limit (Tin = 27 °C) of the guidelines provided by the American Society of Heating, Refrigerating and Air-Conditioning Engineers (ASHRAE) [2] and that the chip temperature reached the threshold of our optimization problem (Tchip = 80 °C) when the server was running in the busy state (u = 100%); the thermal resistance was accordingly set to Ri = 0.0147 K/W per Equation (14) (see the sketch after this list)

  • We observe that the total power of the modified uniform task (MUT) strategy was higher than that of CTWA-MTP and lower than that of MPIT-TA; this implies that workload allocation methods based on the chip temperature, such as MUT and CTWA-MTP, outperform those based on the inlet temperature
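
For context, the calibration in the second highlight can be read through a standard convective thermal-resistance relation (a minimal sketch; the exact form of Equation (14) appears in the full text):

\[ T_{\mathrm{chip},i} = T_{\mathrm{in},i} + R_i \, P_i \quad\Longrightarrow\quad R_i = \frac{T_{\mathrm{chip},i} - T_{\mathrm{in},i}}{P_i} \]

Fixing Tchip = 80 °C and Tin = 27 °C in the busy state (a 53 K rise across the resistance) then pins down Ri once the busy-state power Pi is known; the paper reports Ri = 0.0147 K/W.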


Introduction

Numerous trends in the information technology (IT) industry show increasing energy consumption in data center operation over the past decade [1]. To enhance the energy efficiency of data centers, many existing works focus on optimizing the data center layout or minimizing the effect of heat recirculation by placing the workload intelligently. These methods adopt the inlet temperature to describe the thermal environment of the server and have achieved some energy savings. Our method, in contrast, produces an optimal workload allocation scheme that prevents the servers from overheating or overcooling, saving a significant amount of cooling energy without degrading the servers' thermal reliability.
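
The abstract heat-flow model referenced above is conventionally written in a cross-interference form (a sketch of the standard formulation; the paper's exact notation may differ):

\[ T^{\mathrm{in}}_i = T_{\mathrm{sup}} + \sum_{j=1}^{N} d_{ij} \, P_j \]

where T_sup is the CRAC supply-air temperature, P_j is the power dissipated by server j, and d_ij captures how much of server j's exhaust heat recirculates to server i's inlet. Inlet-temperature-based strategies constrain T_in_i directly; CTWA-MTP instead propagates it one step further, through the server's thermal resistance, to the chip temperature.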

Related Works
Strategy for Minimizing Holistic Power Consumption of Data Centers
Server Power Model
Abstract Heat-Flow Model
Equipment Thermal Resistance Model
Total Power Consumption of Data Center
Problem Statement and GA Optimization
Simulation and Parameter Setup
Evaluation of Total Power Consumption
Evaluation of Chip Temperature and Inlet Temperature
Evaluation of Workload Allocation
Conclusions