Abstract

Cost minimization is a major concern in data center networks (DCNs). Existing DCNs generally adopt Clos networks with crossbar middle switches to achieve non-blocking data switching among the servers, where the number of middle switches is tied in a fixed proportion to the number of ports of the aggregation switches. Moreover, the reconfiguration overhead of the switches is generally ignored, which may contradict engineering practice. In this paper, we consider batch-scheduling-based packet switching in DCNs with reconfiguration overhead at each middle switch, which inevitably introduces packet delay. Using existing state-of-the-art traffic matrix decomposition algorithms, we generate a set of permutations, each of which corresponds to the configuration of a middle switch. By reconfiguring each middle switch to fulfill multiple configurations, with all middle switches operating in parallel, we reveal a tradeoff between packet delay and switch cost (measured by the number of middle switches), while performance-guaranteed switching with bounded packet delay is achieved without any packet loss. Based on this tradeoff, we can minimize the number of middle switches under a given packet delay bound, minimize an overall cost metric by translating delay into a comparable cost factor, and formulate criteria for choosing a suitable matrix decomposition algorithm. This provides a flexible way to reduce the number of middle switches by slightly enlarging the packet delay bound.
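To make the decomposition step concrete, the sketch below shows a Birkhoff–von Neumann-style peeling of a traffic matrix into weighted permutations; each permutation can be read as one configuration of a middle switch, and its weight as the time that configuration is held. This is an illustrative assumption of how such a decomposition can be computed (the function name bvn_decompose and the use of an assignment solver are ours, not from the paper), not the paper's specific algorithm.

```python
# A minimal sketch, assuming the (padded) traffic rate matrix is doubly stochastic.
import numpy as np
from scipy.optimize import linear_sum_assignment

def bvn_decompose(T, tol=1e-9):
    """Peel a doubly stochastic matrix T into weighted permutation matrices.

    Returns a list of (weight, permutation) pairs; each permutation is a
    candidate configuration for a middle switch, and the weight is the
    fraction of time (or number of slots) that configuration is held.
    """
    T = T.astype(float).copy()
    schedule = []
    while T.max() > tol:
        # Find a permutation supported entirely on positive residual entries:
        # penalize (near-)zero entries and solve an assignment problem.
        cost = np.where(T > tol, 0.0, 1.0)
        rows, cols = linear_sum_assignment(cost)
        if cost[rows, cols].sum() > 0:
            break  # residual no longer admits a full-support permutation
        weight = T[rows, cols].min()      # hold time of this configuration
        P = np.zeros_like(T)
        P[rows, cols] = 1.0
        schedule.append((weight, P))
        T -= weight * P                   # remove the served traffic
    return schedule
```

Under this view, assigning fewer middle switches means each switch must cycle through more of the generated configurations sequentially, and every extra configuration incurs one reconfiguration overhead, enlarging the batch completion time (packet delay); adding middle switches spreads the configurations in parallel and shrinks the delay, which is the delay-versus-switch-cost tradeoff described above.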
