Abstract

Datacentres are experiencing a tremendous increase in network traffic due to cloud services and many other emerging applications. The bandwidth required by an exascale supercomputer node is expected to grow 20 times every 4 years (i.e., 20 Pb/s in 2016 and 400 Pb/s in 2020) [1]. However, owing to thermal dissipation constraints, the total power consumption that a single datacentre can afford is only allowed to increase at a much lower rate (i.e., 2 times every 4 years, from 10 MW in 2016 to 20 MW in 2020) [1]. The consumers of energy in a datacentre are the IT equipment (e.g., servers and network equipment) and the supporting facilities (e.g., lighting and cooling). To quantify how efficiently a datacentre uses its power, the power usage effectiveness (PUE) is defined as the ratio of the total facility power to the IT equipment power. Many efforts have been devoted to reducing PUE. For instance, a careful choice of datacentre location can greatly reduce the energy required for cooling and thus significantly improve PUE. It was very recently reported that Facebook carefully chose such a location and launched an Arctic datacentre (consisting of three 28,000 square-meter buildings) in Sweden [2]. By exploiting the icy conditions of the Arctic Circle, this datacentre reaches a PUE of around 1.07 [2]. Such a low PUE implies that in modern datacentres the major focus of energy savings should shift to the IT equipment itself. Currently, network equipment may account for up to approximately 20% of the total energy consumed by the IT equipment in a datacentre, and this share is expected to grow in the future [3]. Thus, to sustainably handle the ever-increasing traffic demand, it is of utmost importance to address the energy consumption of datacentre networks (DCNs), which provide the interconnections among servers within a datacentre as well as the interfaces to the Internet. Typically, DCNs comprise several tiers, namely edge, aggregation and core. To reduce energy consumption and increase bandwidth, most research efforts so far have focused on optical interconnects (e.g., as reviewed in [1]) for the core/aggregation tiers (i.e., switching among different racks). It should be noted, however, that the power consumed by the switches at the edge tier, i.e., the top-of-rack (ToR) switches that interconnect the servers within the same rack, is dominant (up to 90% of the total power consumed by all switches in DCNs [4]) because of the sheer number of ToR switches. Therefore, improving energy efficiency at the ToR level should be considered first in order to decrease the overall DCN power consumption. Optical interconnects can be used to increase energy efficiency at the ToR. On the other hand, designing suitable optical interconnects at the ToR requires a good understanding of the characteristics of the traffic generated by the servers and of its specific requirements. A number of key points summarized from the literature [5-7], such as traffic locality, multicast capability, variable flow capacity and burstiness, should be taken into account. However, there have been very few studies on this problem. Therefore, we discuss and compare several optical solutions that can offer dynamic and flexible optical connections for datacentre traffic within the rack.
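For clarity, the PUE definition used above can be written out explicitly; the figures in the example are purely illustrative and are not taken from [2]:

PUE = (total facility power) / (IT equipment power)

For instance, a facility drawing 21.4 MW in total whose IT equipment consumes 20 MW would have PUE = 21.4 / 20 = 1.07, i.e., only about 7% of the power goes to supporting facilities such as cooling and lighting.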

