Abstract

Today, data center networks (DCNs) are built using multi-tier architectures. These large-scale networks face many challenges, such as security, delay, low throughput, loops, link oversubscription, and TCP Incast and Outcast. In this paper, a TCAM (Ternary Content Addressable Memory) based routing technique is proposed that augments the routing capabilities of multi-tier architectures in large-scale networks. The routing complexities in these architectures are rectified and improved by implementing an additional TCAM-based routing table in leaf/Top-of-Rack (ToR) switches for a specific number of compute nodes in particular Pods, and the technique scales to all datacenter nodes. To test the model, we implemented two prototypes, one depicting our proposed TCAM-based switch and the other a typical ToR switch, and compared their performance to determine whether the proposal introduces any overhead. The preliminary results show that our TCAM-based routing table technique is fast: it forwards network packets at line rate, does not introduce considerable latency, consumes less than 3% of on-chip resource power, and helps to solve or mitigate the critical problems listed above that are present in current large DCs' three-tier architectures, especially in the Top-of-Rack and aggregation layer switches.

Highlights

  • A datacenter (DC) is a large, dedicated cluster of compute servers and switching devices built to deliver online services to its customers

  • We propose that an additional Ternary Content Addressable Memory (TCAM) table may be implemented in Top of Rack (ToR)/leaf switches, containing flow entries to connect the servers linked to the same leaf/ToR switch and the intra-Pod and inter-Pod servers
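The ternary matching that such a TCAM-based table performs can be sketched in software as follows. This is a minimal illustrative model, not the authors' implementation; the entry fields, addresses, and port names are assumptions. Each entry carries a value, a bit mask (wildcard bits are 0), a priority, and a forwarding action; a lookup keeps the highest-priority entry whose cared-for bits match the key.

```python
# Minimal sketch of a TCAM-style ternary match table (illustrative only).
from dataclasses import dataclass
from typing import Optional

@dataclass
class TcamEntry:
    value: int      # pattern bits to match
    mask: int       # 1-bits are "care" positions; 0-bits are wildcards
    priority: int   # higher wins when several entries match
    out_port: str   # forwarding action

class TcamTable:
    def __init__(self) -> None:
        self.entries: list[TcamEntry] = []

    def add(self, value: int, mask: int, priority: int, out_port: str) -> None:
        self.entries.append(TcamEntry(value, mask, priority, out_port))

    def lookup(self, key: int) -> Optional[str]:
        # A hardware TCAM compares all entries in parallel; this loop
        # emulates that by keeping the highest-priority match.
        best: Optional[TcamEntry] = None
        for e in self.entries:
            if key & e.mask == e.value & e.mask:
                if best is None or e.priority > best.priority:
                    best = e
        return best.out_port if best else None

# Hypothetical example: a prefix entry for servers under the same leaf/ToR
# switch, plus a lower-priority default route towards the spine.
table = TcamTable()
table.add(0x0A000100, 0xFFFFFF00, priority=10, out_port="local-rack")
table.add(0x00000000, 0x00000000, priority=1,  out_port="uplink-spine")
print(table.lookup(0x0A000105))  # host 10.0.1.5 -> local-rack
print(table.lookup(0x0A000205))  # host 10.0.2.5 -> uplink-spine
```

Because all entries are compared in parallel in hardware, a real TCAM returns the matching action in a single lookup cycle, which is what allows line-rate forwarding at the ToR layer.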


INTRODUCTION

A datacenter (DC) is a large, dedicated cluster of compute servers and switching devices built to deliver online services to its customers. Not only are cloud providers building huge data center networks; large universities and private enterprises are also shifting towards deploying hundreds to thousands of compute nodes and routing devices to support extended services such as web servers, e-mail, and online libraries. These data centers need high-throughput, low-latency packet processing, which requires mitigating the problems that stand in the way of those goals. As the technologies have evolved, certain anomalies and side effects have arisen in the system. Today, these large datacenter networks face the challenges of performance, delay, link oversubscription, loops, TCP Incast/Outcast, etc.

