Major innovations in computing have been driven by scaling up computing infrastructure while aggressively optimizing operating costs. The result is a worldwide network of datacenters that consume a large amount of energy, mostly in an energy-efficient manner. Because the electric grid powering these datacenters has provided a simple and opaque abstraction of an unlimited, reliable power supply, the computing industry has remained largely oblivious to the carbon intensity of the electricity it uses. Much like the rest of society, it has generally treated the carbon intensity of electricity as constant, which was mostly true for a fossil fuel-driven grid. As a result, the cost-driven objective of increasing energy efficiency, i.e., doing more work per unit of energy, has generally been viewed as the most carbon-efficient approach. However, as the electric grid is increasingly powered by clean energy and exposes its time-varying carbon intensity, the most energy-efficient operation is no longer necessarily the most carbon-efficient one. At the same time, the recent focus on exploiting the flexibility of computing workloads, along temporal, spatial, and resource dimensions, to reduce carbon emissions comes at the cost of either performance or energy efficiency. In this paper, we quantify the trade-offs between energy efficiency and carbon efficiency in exploiting computing's flexibility and show that blindly optimizing for energy efficiency is not always the right approach.
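To make the core trade-off concrete, the following minimal sketch (not drawn from the paper; all numbers and names are hypothetical) models carbon emissions as energy consumed multiplied by the grid's carbon intensity at the time a job runs. It compares an immediate, energy-optimal schedule against a deferred schedule that is slightly less energy-efficient but runs when carbon intensity is low.

```python
# Illustrative sketch (hypothetical numbers): carbon emissions depend on both
# the energy a job consumes and the carbon intensity of the grid when it runs,
# so the most energy-efficient schedule is not always the lowest-carbon one.

# Hourly grid carbon intensity in gCO2/kWh (hypothetical values; the low
# midday value mimics abundant solar generation).
carbon_intensity = {9: 450, 12: 120, 18: 400}

def emissions(energy_kwh: float, start_hour: int) -> float:
    """Carbon emitted (gCO2) by a job consuming `energy_kwh` starting at `start_hour`."""
    return energy_kwh * carbon_intensity[start_hour]

# Schedule A: run immediately at 09:00 at peak energy efficiency (10 kWh).
immediate = emissions(energy_kwh=10.0, start_hour=9)

# Schedule B: defer to 12:00; assume the deferral costs a 15% energy-efficiency
# penalty (11.5 kWh), but the job now runs on low-carbon electricity.
deferred = emissions(energy_kwh=11.5, start_hour=12)

print(f"Immediate, energy-optimal schedule: {immediate:.0f} gCO2")  # 4500 gCO2
print(f"Deferred, less energy-efficient:    {deferred:.0f} gCO2")   # 1380 gCO2
```

The sketch only illustrates the temporal dimension of flexibility; spatial and resource flexibility follow the same logic, with carbon intensity and energy consumption varying by location or hardware instead of by time.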