Abstract

The data center network connecting the servers in a data center plays a crucial role in orchestrating the infrastructure to deliver peak performance to users. To meet high performance and reliability requirements, the data center network is usually built from a massive number of network devices and links, achieving 1:1 oversubscription (full bisection bandwidth) for peak workloads. In practice, however, traffic rarely reaches peak capacity and the links are underutilized most of the time, resulting in an enormous waste of energy. Aiming to achieve an energy-proportional data center network without unduly compromising throughput or fault tolerance, in this paper we propose two efficient schemes from the perspective of resource allocation, routing and flow scheduling. We mathematically formulate the energy optimization problem as a multi-commodity minimum cost flow problem and prove its NP-hardness. We then propose a computationally efficient heuristic that applies an AI resource abstraction technique, the Blocking Island Paradigm. Additionally, we design a practical topology-based solution that leverages Random Packet Spraying and is consistent with multipath routing protocols. Both simulations and theoretical analysis demonstrate the feasibility and convincing performance of our frameworks.
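
The multi-commodity minimum cost flow formulation mentioned above can be sketched as follows. The notation here (binary link on/off variables, per-commodity flows, link power and capacity) is illustrative and not taken verbatim from the paper:

```latex
\min \sum_{e \in E} P_e \, y_e
\quad \text{subject to} \quad
\sum_{k} f_e^{k} \le c_e \, y_e \;\; \forall e \in E
\qquad \text{(only powered-on links carry traffic)}
\]
\[
\sum_{e \in \delta^{+}(v)} f_e^{k} \;-\; \sum_{e \in \delta^{-}(v)} f_e^{k} \;=\; d_v^{k}
\;\; \forall v \in V,\; \forall k
\qquad \text{(per-commodity flow conservation)}
\]
\[
y_e \in \{0,1\}, \qquad f_e^{k} \ge 0
```

Here $P_e$ is the power draw of link $e$, $c_e$ its capacity, and $d_v^{k}$ the demand of commodity $k$ at node $v$. Fixing $y_e = 1$ for every link recovers an ordinary multi-commodity flow problem; it is the binary on/off choice over links that makes the energy-minimizing variant NP-hard.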

Highlights

  • The data center, as a centralized repository clustering a large number of servers, has become home to essential large-scale computation, storage and Internet-based applications which provide various services like search, social networking, e-mails, gaming, cloud computing, and so on [1,2]

  • Based on the above observations, this paper aims to achieve a bandwidth-guaranteed, energy-proportional data center network, where the amount of power consumed by the network is proportional to the actual traffic workload

  • Energy-aware heuristic schemes: we propose two heuristic solutions to the energy optimization problem formulated in Section “Problem statement”, one based on the Blocking Island Paradigm and the other topology-based
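
The Random Packet Spraying mechanism used by the topology-based scheme can be illustrated with a minimal sketch. Unlike flow-level ECMP hashing, it picks a next hop independently for every packet, spreading even a single large flow across all equal-cost paths; the function and hop names below are illustrative, not from the paper:

```python
import random

def spray_packets(packets, next_hops, rng=None):
    """Assign each packet independently to a random equal-cost next hop.

    This is per-packet (not per-flow) load balancing: one flow's
    packets end up roughly evenly spread over all available uplinks.
    """
    rng = rng or random.Random()
    return [(pkt, rng.choice(next_hops)) for pkt in packets]

# Example: spray 1000 packets of a single flow over 4 equal-cost uplinks.
uplinks = ["up0", "up1", "up2", "up3"]
assignments = spray_packets(range(1000), uplinks, rng=random.Random(42))

counts = {h: 0 for h in uplinks}
for _, hop in assignments:
    counts[hop] += 1
# Each uplink receives roughly 250 of the 1000 packets.
```

The well-known cost of this design is packet reordering within a flow, which is why the paper pairs it with a topology-aware scheme and multipath-compatible routing.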

Introduction

The data center, as a centralized repository clustering a large number of servers, has become home to essential large-scale computation, storage and Internet-based applications which provide various services like search, social networking, e-mail, gaming, cloud computing, and so on [1,2]. Research [4,5] shows that in practice the average link utilization in different data centers ranges between only 5% and 25%, and varies greatly between daytime and night. This reveals that most network devices and links stay idle or underutilized most of the time, yet an idle device consumes up to 90% of the power it draws at full load [6], which leads to a great waste of energy. As illustrated in [4], if the servers were perfectly energy-proportional, then when the data center is 15% utilized (servers and network), the network would consume up to 50% of overall power. Today’s commodity network devices are not energy proportional, mainly because their components (such as transceivers, line cards, fans, etc.) are always kept on regardless of whether they have data packets to transfer, leading to significant energy waste.
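
The 50%-at-15%-utilization claim from [4] can be reproduced with a short back-of-the-envelope calculation. The sketch below assumes perfectly proportional servers and a network that always draws peak power; the network's share of peak facility power (15% here) is an assumed figure for illustration, not a number from the paper:

```python
def network_power_share(utilization, net_peak_frac=0.15):
    """Fraction of total power drawn by a non-energy-proportional network.

    Servers are modeled as perfectly energy-proportional (power scales
    linearly with utilization), while the network draws its full peak
    power at any load. net_peak_frac, the network's share of peak
    facility power, is an assumed illustrative value.
    """
    server_power = (1 - net_peak_frac) * utilization  # proportional servers
    net_power = net_peak_frac                          # constant, load-independent
    return net_power / (net_power + server_power)

share = network_power_share(0.15)  # ~0.54 at 15% utilization
```

At full utilization the network's share falls back to its 15% peak fraction; at 15% utilization the constant network draw dominates, pushing its share above 50% and matching the trend reported in [4].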

