Abstract

Most information technology (IT) equipment in a data center is air-cooled: its electrical components produce heat, which must be removed to keep the temperature of the IT equipment from rising to an unacceptable level. The energy consumed by a data center's cooling system is positively related to the air temperature outside the data center, and the difference between a data center's internal temperature and the outside air temperature varies from one data center location to another. If the workload of Internet cloud services is rescheduled to the data centers with the smallest temperature difference, the cooling energy savings are the largest. A cooling energy-consumption model and the query characteristics of cloud services provide the methodology for formulating the energy consumption and the workload rescheduling. However, the cloud service must still meet its tail latency constraint after rescheduling. We address this by estimating the high-percentile tail latency and scheduling the cloud service only to data centers that can satisfy the tail latency constraint. Finally, a proactive weather-aware geo-scheduling algorithm, called EC3, is proposed to distribute end-users' loads among data centers so as to reduce the cooling energy consumption. Trace-driven experiments on real clouds and data center workload traces show the effectiveness of our design in reducing data center cooling energy consumption.
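As a rough illustration of the scheduling idea summarized above (not the paper's EC3 algorithm), the following Python sketch greedily assigns a load to the data center with the lowest cooling-cost indicator among those whose estimated high-percentile latency satisfies the constraint. All names (DataCenter, cooling_cost_proxy, schedule), the field values, and the cost model are illustrative assumptions, not details from the paper.

```python
# Illustrative sketch only: a greedy weather-aware placement rule, NOT the
# paper's EC3 algorithm. Data-center fields, the cooling-cost proxy, and the
# latency values below are hypothetical assumptions for demonstration.
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class DataCenter:
    name: str
    outside_temp_c: float    # forecast outside air temperature
    internal_temp_c: float   # target internal (supply-air) temperature
    p99_latency_ms: float    # estimated high-percentile latency for this load
    capacity_qps: float      # remaining serving capacity

def cooling_cost_proxy(dc: DataCenter) -> float:
    """Hypothetical proxy: cooling energy grows as the outside air temperature
    rises above the internal temperature target."""
    return max(0.0, dc.outside_temp_c - dc.internal_temp_c)

def schedule(load_qps: float, dcs: List[DataCenter],
             latency_slo_ms: float) -> Optional[DataCenter]:
    """Pick the feasible data center (enough capacity, tail latency within the
    constraint) with the lowest cooling-cost proxy; return None if none fits."""
    feasible = [dc for dc in dcs
                if dc.capacity_qps >= load_qps
                and dc.p99_latency_ms <= latency_slo_ms]
    if not feasible:
        return None
    return min(feasible, key=cooling_cost_proxy)

if __name__ == "__main__":
    dcs = [
        DataCenter("dc-cold", outside_temp_c=5.0, internal_temp_c=25.0,
                   p99_latency_ms=80.0, capacity_qps=1000.0),
        DataCenter("dc-hot", outside_temp_c=35.0, internal_temp_c=25.0,
                   p99_latency_ms=40.0, capacity_qps=1000.0),
    ]
    chosen = schedule(load_qps=200.0, dcs=dcs, latency_slo_ms=100.0)
    print(chosen.name if chosen else "no feasible data center")
```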
