Abstract

As high-performance computing and data storage transition toward Internet-based services, the world has witnessed an ever-increasing demand for both the size and the capacity of data centers. The growth of cloud-based services and applications shows no sign of slowing down, and custom hardware for machine learning algorithms is beginning to be deployed at scale in dedicated data centers. Today's data centers accommodate large amounts of information technology (IT) equipment, such as data-processing units, data-storage units, and communication devices. A recent report estimated the energy usage of data centers in the United States (US) alone at 70 billion kWh in 2014, corresponding to 1.8% of the total electric energy consumed in the country [1]. Because IT equipment requires low DC voltages (typically ranging from a few volts to a few dozen volts) to operate, various power-delivery architectures have been established to supply low DC voltage from utility and renewable resources. In this respect, the power-delivery infrastructure of a data center can be regarded as a microgrid owing to its high installed power capacity and dynamic loads. However, data centers also differ from typical DC microgrids in many regards, both in the characteristics of their loads (extraordinarily rapid transients, yet all controlled and managed from a central load-scheduling interface) and in their extreme uptime requirements. This chapter addresses major aspects of power-delivery architectures in data centers, such as efficiency, reliability, integration with renewable resources, and protection. It also critically evaluates the technical and commercial barriers to widespread DC power distribution in data centers and presents a few examples of existing DC data centers.
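
The consumption figures quoted above can be cross-checked with a quick back-of-the-envelope calculation (not part of the cited report); assuming the 1.8% share is taken with respect to total annual US electricity consumption, the implied national total is

\[ \frac{70 \times 10^{9}\ \text{kWh}}{0.018} \approx 3.9 \times 10^{12}\ \text{kWh} \approx 3{,}900\ \text{TWh}, \]

which is consistent with the order of magnitude of US electricity consumption reported for 2014.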
