Abstract

Data center networks may comprise tens or hundreds of thousands of nodes and, naturally, suffer from frequent software and hardware failures as well as link congestion. Packets are routed along the shortest paths with sufficient resources to facilitate efficient network utilization and minimize delays. In such dynamic networks, links frequently fail or become congested, making the recalculation of the shortest paths a computationally intensive problem. Various routing protocols have been proposed to overcome this problem by focusing on network utilization rather than speed. Surprisingly, the design of fast shortest-path algorithms for data centers has been largely neglected, even though such algorithms are a universal component of routing protocols. Moreover, parallelization techniques have mostly been applied to random network topologies rather than the regular topologies typically found in data centers. The aim of this paper is to improve scalability and reduce the time required for shortest-path calculation in data center networks through parallelization on general-purpose hardware. We propose a novel algorithm that parallelizes edge relaxations as a faster and more scalable solution for popular data center topologies.
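The abstract does not reproduce the algorithm itself, so the following is only a minimal sketch of the general idea of parallelized edge relaxation in a Bellman-Ford-style shortest-path computation. The chunk-and-merge scheme, function names, and the toy graph are our own illustrative assumptions, not the authors' method: each round splits the edge list across workers, every worker relaxes its chunk against a snapshot of the current distances, and the per-worker results are merged by taking the element-wise minimum.

package main

import (
	"fmt"
	"math"
	"sync"
)

// Edge is a directed edge with a non-negative weight, as in a
// graph model of a data center topology (illustrative assumption).
type Edge struct {
	From, To int
	Weight   float64
}

// parallelBellmanFord sketches parallel edge relaxation: in each of the
// n-1 rounds, workers relax disjoint chunks of the edge list into private
// distance copies, which are then merged by element-wise minimum.
func parallelBellmanFord(n int, edges []Edge, src, workers int) []float64 {
	dist := make([]float64, n)
	for i := range dist {
		dist[i] = math.Inf(1)
	}
	dist[src] = 0

	chunk := (len(edges) + workers - 1) / workers
	for round := 0; round < n-1; round++ {
		locals := make([][]float64, workers)
		var wg sync.WaitGroup
		for w := 0; w < workers; w++ {
			lo := w * chunk
			if lo > len(edges) {
				lo = len(edges)
			}
			hi := lo + chunk
			if hi > len(edges) {
				hi = len(edges)
			}
			local := append([]float64(nil), dist...) // private copy per worker
			locals[w] = local
			wg.Add(1)
			go func(es []Edge, local []float64) {
				defer wg.Done()
				for _, e := range es {
					// Relax against the read-only snapshot dist from the
					// previous round; write only into the private copy.
					if cand := dist[e.From] + e.Weight; cand < local[e.To] {
						local[e.To] = cand
					}
				}
			}(edges[lo:hi], local)
		}
		wg.Wait()
		// Merge: dist[v] becomes the minimum estimate over all workers.
		for v := 0; v < n; v++ {
			for w := 0; w < workers; w++ {
				if locals[w][v] < dist[v] {
					dist[v] = locals[w][v]
				}
			}
		}
	}
	return dist
}

func main() {
	// Tiny hypothetical graph, for illustration only.
	edges := []Edge{{0, 1, 1}, {0, 2, 4}, {1, 2, 2}, {2, 3, 1}, {1, 3, 5}}
	fmt.Println(parallelBellmanFord(4, edges, 0, 2)) // [0 1 3 4]
}

Because every worker relaxes against an immutable snapshot and writes only to its own copy, the sketch avoids data races without atomics; the trade-off is the per-round merge cost, which the paper's actual design may well handle differently.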
