Abstract

The growth of cloud-based services is largely supported by the core networking infrastructure of large-scale data centers, and the scalability of these services depends on the performance and dependability of the data centers themselves. Hence, the data center network must be agile and reconfigurable in order to respond quickly to ever-changing application demands and service requirements. The network must also be able to interconnect a large number of nodes and provide an efficient, fault-tolerant routing service to upper-layer applications. In response to these challenges, the research community began exploring novel interconnect topologies, namely Flecube, DCell, FiConn, HyperFlaNet, and BCube. However, these topologies either scale too fast (growing exponentially in size) or too slow, and therefore suffer from performance bottlenecks. In this paper, we propose a novel data center topology called LaScaDa (Layered Scalable Data Center) as a new solution for building scalable and cost-effective data center networking infrastructures. The proposed topology organizes nodes into clusters of similar structure, then interconnects these clusters in a well-crafted pattern, using a coordinate system for nodes that reduces the number of redundant connections between clusters while maximizing connectivity. LaScaDa forwards packets between nodes using a new hierarchical row-based routing algorithm. The algorithm constructs the route from source to destination based on the modular difference between the source and destination coordinates. Furthermore, the proposed topology interconnects a large number of nodes using a small node degree. This strategy increases the number of directly connected clusters and avoids redundant connections. As a result, the network achieves good quality in terms of average path length (APL), bisection bandwidth, and aggregate bottleneck throughput. Experimental results show that LaScaDa outperforms DCell, BCube, and HyperBcube in terms of scalability, while providing a good quality of service.
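As a rough illustration of the coordinate-based idea (a minimal sketch, not the paper's exact hierarchical row-based algorithm), the snippet below routes between nodes identified by coordinate vectors, correcting one coordinate per step by the modular difference between destination and source. The function name `modular_route` and the per-dimension `radix` parameter are illustrative assumptions, not names from the paper.

```python
# Hypothetical sketch: coordinate routing by modular difference.
# Each step corrects one coordinate of the current position by the
# modular difference between the destination and source coordinates.

def modular_route(src, dst, radix):
    """Return a hop-by-hop list of coordinates from src to dst."""
    assert len(src) == len(dst) == len(radix)
    path = [tuple(src)]
    cur = list(src)
    for i in range(len(cur)):
        diff = (dst[i] - cur[i]) % radix[i]  # modular difference in dimension i
        if diff:
            cur[i] = (cur[i] + diff) % radix[i]
            path.append(tuple(cur))
    return path

# Example: 3-dimensional coordinates, 4 possible values per dimension.
print(modular_route((0, 1, 3), (2, 1, 0), (4, 4, 4)))
# [(0, 1, 3), (2, 1, 3), (2, 1, 0)]
```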

Highlights

  • Massive data centers are being built around the world to provide various cloud computing services.

  • Data center networks are traditionally built around Top of Rack (ToR) switches interconnected through End of Rack (EoR) switches, which are in turn connected through core switches.

  • LaScaDa can substantially increase network capacity, as it has the largest bisection bandwidth of the compared topologies.


Summary

INTRODUCTION

Massive data centers are being built around the world to provide various cloud computing services. Such networks are traditionally built around Top of Rack (ToR) switches interconnected through End of Rack (EoR) switches, which are in turn connected through core switches. This approach is very costly, while leading to significant bandwidth oversubscription towards the network core. All these issues encouraged several researchers to propose new topologies for scalable and cost-effective network infrastructures, namely FatTree [6], FiConn [7], DCell [8], BCube [9], and HyperBcube [10]. We propose a novel interconnection network topology called LaScaDa (Layered Scalable Data Center) that scales faster than HyperBcube, BCube, Flecube, and DCell. (In this document we use the words "node" and "server" interchangeably.) The following are some of the contributions of this paper: 1) a novel data center topology called LaScaDa, capable of scaling the entire network to millions of nodes using nodes with small degrees and small port-count switches. A sketch of the scaling contrast follows below.
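To make the "scales too fast / too slow" contrast concrete, the sketch below computes server counts from the published size formulas of two comparison topologies: DCell's doubly exponential recurrence and BCube's n^(k+1). LaScaDa's own size formula is defined in the paper and is deliberately not reproduced here.

```python
# Sketch: node-count growth per level for two comparison topologies,
# using their published size formulas (LaScaDa's formula omitted).

def dcell_servers(n: int, k: int) -> int:
    """DCell_k with n-port switches: t_k = t_{k-1} * (t_{k-1} + 1), t_0 = n."""
    t = n
    for _ in range(k):
        t = t * (t + 1)
    return t

def bcube_servers(n: int, k: int) -> int:
    """BCube_k with n-port switches hosts n**(k+1) servers."""
    return n ** (k + 1)

for k in range(3):
    print(f"k={k}: DCell={dcell_servers(4, k):,}  BCube={bcube_servers(4, k):,}")
# DCell: 4, 20, 420, ... (doubly exponential)   BCube: 4, 16, 64, ...
```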

RELATED WORK
The Switch-Centric Category
The Server-Centric Category
LASCADA TOPOLOGY
Some Definitions
Physical Structure
Illustrating the Execution of LCM Algorithm
LASCADA KEY FEATURES
Diameter
Scalability and Physical Cost
Bandwidth
Bisection Bandwidth
Average Path Length
Aggregate Bottleneck Throughput
Incremental Expansion
Fault Free Routing Scheme
Fault-Tolerant Routing Scheme
EXPERIMENTAL RESULTS
DISCUSSION
CONCLUSIONS