Abstract

A recent study found that IT contributes about 2% of global greenhouse gas emissions, comparable to the share of the aviation industry, and projected that this share would double by 2020. Increasing environmental concern and regulatory action will soon force a paradigm shift in how IT solutions are designed and managed across their lifecycles. Data centers are a prominent and fast-growing contributor to this impact.

To address these concerns, we propose the development of a suite of technologies for a sustainable data center (SDC). The goal is to reduce the environmental footprint of a data center to such an extent that the services it offers are more environmentally friendly than conventional services offered within today's state-of-the-art facilities.

Developing and demonstrating an SDC requires multi-disciplinary collaboration among mechanical engineers, electrical engineers, computer scientists, and others. The compute infrastructure of the data center consists of thousands of servers hosting revenue-generating services, interconnected with each other and the outside world via networking equipment, and relying on storage devices for persistent data. The data center also has a power infrastructure that feeds electricity to all of this equipment, and a cooling infrastructure that removes heat from it. The economic and environmental burden of the latter two infrastructures can often equal that of the compute infrastructure.

All of these infrastructures, and many of their components, have traditionally been designed and managed independently, resulting in unnecessary redundancy and waste. For example, CRAC units in high-availability (Tier 4) data centers are often provisioned at twice the required capacity to ensure sufficient backup cooling in the event of a failure. However, we have previously shown how to map the thermal zone of influence of each CRAC unit and how to identify regions of the data center that naturally have high levels of cooling redundancy. Using such thermal zones, hardware can be provisioned to services based on availability and reliability requirements; for example, critical workloads can be placed in regions of the data center that are served by multiple thermal zones. The economic and environmental benefit of eliminating a redundant standby CRAC unit illustrates the kind of advantage that can be gained by integrating the compute and cooling infrastructures. Integrating the design and management of the entire data center in this way, both within and across its compute, power, and cooling infrastructures, is crucial to improving operational efficiency and reducing environmental impact.

We identify five principles for integrated design and management that cut across all three infrastructures, and indeed across the multiple disciplines needed to achieve this goal. This talk will describe each of these five principles and discuss how they can be employed to improve IT and data center sustainability.

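To make the thermal-zone placement idea concrete, the following is a minimal illustrative sketch, not taken from the talk: the Rack and place names, the two-zone requirement for critical workloads, and the CRAC labels are all assumptions made for illustration. It shows one way workloads could be assigned to racks according to how many CRAC thermal zones cover each rack.

    # Hypothetical sketch: assign workloads to racks by thermal-zone coverage.
    # Racks covered by more CRAC thermal zones offer more cooling redundancy,
    # so critical services are placed there first.
    from dataclasses import dataclass

    @dataclass
    class Rack:
        name: str
        thermal_zones: set      # CRAC units whose zone of influence covers this rack
        free_slots: int

    def place(workloads, racks):
        """Assign each (name, is_critical) workload to a rack.

        Critical workloads require coverage by at least two thermal zones
        (an assumed policy); non-critical workloads take remaining capacity.
        """
        placement = {}
        # Prefer racks with the most redundant cooling.
        by_redundancy = sorted(racks, key=lambda r: len(r.thermal_zones), reverse=True)
        # Place critical workloads before non-critical ones.
        for name, is_critical in sorted(workloads, key=lambda w: not w[1]):
            for rack in by_redundancy:
                if rack.free_slots == 0:
                    continue
                if is_critical and len(rack.thermal_zones) < 2:
                    continue
                rack.free_slots -= 1
                placement[name] = rack.name
                break
        return placement

    racks = [Rack("R1", {"CRAC-A", "CRAC-B"}, 2), Rack("R2", {"CRAC-C"}, 4)]
    workloads = [("billing", True), ("batch-report", False)]
    print(place(workloads, racks))  # e.g. {'billing': 'R1', 'batch-report': 'R1'}

In this sketch the critical "billing" service lands only on a rack served by two thermal zones, so it would keep receiving cool air if either CRAC unit failed, which is the placement property described above.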