Cloud computing not only requires high-capacity data center networks to accelerate bandwidth-hungry computations, but also imposes considerable power costs on cloud providers. In recent years, many advanced data center network architectures, such as Fat-Tree [1] and BCube [2], have been proposed to increase network throughput, but little attention has been paid to their power efficiency. This paper presents the first comprehensive comparative study of typical data center networks with respect to their Network Power Effectiveness (NPE), which measures the end-to-end bps per watt of data transmission and reflects the tradeoff between power consumption and network throughput. We account for switches, server NICs, and server CPU cores when evaluating network power consumption. We measure NPE under both regular routing and power-aware routing, and investigate the impacts of topology size, traffic load, the throughput threshold in power-aware routing, network power parameters, and traffic pattern. The results show that in most cases Flattened Butterfly achieves the highest NPE among the architectures under study, and that server-centric architectures usually attain higher NPE than the Fat-Tree and VL2 architectures. In addition, the sleep-on-idle technique and power-aware routing can significantly improve NPE for all the data center architectures, especially when the traffic load is low. We believe these results are useful to cloud providers when they design or upgrade data center networks, or employ network power management.
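As a rough formalization, not taken verbatim from the paper, NPE can be read as the aggregate end-to-end throughput delivered by the network divided by the total power drawn by the network elements the study counts (switches, server NICs, and the server CPU cores involved in forwarding); the notation below is ours:

\[
  \mathrm{NPE} \;=\;
  \frac{\text{aggregate end-to-end throughput (bps)}}
       {P_{\text{switches}} + P_{\text{NICs}} + P_{\text{CPU cores}}}
  \qquad [\text{bps/W}]
\]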