Abstract

As cloud computing continues to gain significance across fields, the energy consumption of datacentres creates new challenges in the design and operation of computer systems, with cooling remaining a key part of the total energy expenditure. We investigate the implications of increasing the room temperature setpoint in datacentres to save energy. For this, we develop a holistic model for the energy consumption of the server room that depends on user workload and service level agreement constraints, and that considers both cooling and computing energy dissipation. The model is applicable to a steady-state analysis of the system and provides insight into the impact of the most relevant parameters that affect the net energy consumption, such as the outside temperature, room temperature setpoint, and user demand. We analyse both static and dynamic server provisioning cases. In the latter case, a global power management scheme determines the optimal number of servers required to handle the incoming user demand while fulfilling a target service level objective. Finally, we consider the extra energy needed to maintain service continuity under the higher server mortality rate expected at warmer operational temperatures. Energy and temperature measurements acquired from a server machine running scientific benchmark programs allow us to set the model parameters realistically and to draw practical conclusions.
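To make the trade-off described above concrete, the following is a minimal illustrative sketch of a generic steady-state energy balance for a server room, not the model developed in the paper. The linear server power law, the coefficient-of-performance (COP) relation, and all numerical coefficients are hypothetical placeholders chosen only to show how computing and cooling power can be combined as functions of the room setpoint and the outside temperature.

```python
# Illustrative sketch only: a generic steady-state energy balance, NOT the
# paper's model. All coefficients and the COP law are hypothetical.

def server_power(utilisation, p_idle=100.0, p_max=250.0):
    """Per-server electrical power (W) under a simple linear utilisation model."""
    return p_idle + (p_max - p_idle) * utilisation

def cooling_cop(t_setpoint, t_outside, k=0.25, c=1.0):
    """Hypothetical coefficient of performance: cooling becomes cheaper as the
    room setpoint rises relative to the outside temperature."""
    return max(c + k * (t_setpoint - t_outside), 0.1)

def total_power(n_servers, utilisation, t_setpoint, t_outside):
    """Computing power plus the cooling power needed to remove that heat."""
    p_it = n_servers * server_power(utilisation)
    p_cooling = p_it / cooling_cop(t_setpoint, t_outside)
    return p_it + p_cooling

# Example: 100 servers at 60% load, 27 degC setpoint, 20 degC outside.
print(total_power(100, 0.6, 27.0, 20.0))
```

Under these assumptions, raising the setpoint lowers the cooling term, while the paper's analysis additionally weighs the computing-side and reliability-side costs (e.g. higher server mortality) of warmer operation.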
