Abstract
Large online service providers (OSPs) often build private backbone networks to interconnect data centers in multiple locations. These data centers house numerous applications that produce multiple classes of traffic with diverse performance objectives. Applications in the same class may also differ in relative importance to the OSP's core business. By controlling both the hosts and the routers, an OSP can perform both application rate-control and network routing. However, centralized management of both rates and routes does not scale, due to excessive message-passing between the hosts, routers, and management systems. Fully-distributed approaches likewise scale poorly and converge slowly. To overcome these issues, we investigate two semi-centralized designs that lie at practical points along the spectrum between fully-distributed and fully-centralized solutions. We achieve scalability by distributing computation across multiple tiers of an optimization machinery. Our first design uses two tiers, representing the backbone and classes, to compute class-level link bandwidths and application sending rates. Our second design has an additional tier representing individual data centers. Using optimization, we show that both designs provably maximize the aggregate utility over all traffic classes. Simulations on realistic backbones show that the three-tier design is more scalable, but converges more slowly than the two-tier design.
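The multi-tier decomposition the abstract describes can be illustrated with a minimal sketch of price-based (dual) decomposition for utility maximization. This is an assumption-laden toy, not the paper's actual algorithm: it uses a single bottleneck link, weighted log utilities for each traffic class, and a simple gradient update on the link price. The function name, weights, and capacity are all hypothetical.

```python
# Toy two-tier decomposition (illustrative assumptions, not the paper's design):
#   - Top tier ("backbone"): adjusts a single link price (dual variable)
#     so that aggregate demand meets the link capacity.
#   - Bottom tier ("classes"): each class independently picks the rate x
#     maximizing its local objective w*log(x) - price*x, giving x = w/price.
def two_tier_rates(weights, capacity, iters=2000, step=0.01):
    price = 0.5  # initial link price (arbitrary positive start)
    rates = [0.0] * len(weights)
    for _ in range(iters):
        # Class tier: local utility-maximizing response to the current price.
        rates = [w / price for w in weights]
        # Backbone tier: raise the price if demand exceeds capacity,
        # lower it if the link is underused (projected to stay positive).
        price = max(1e-9, price + step * (sum(rates) - capacity))
    return rates
```

With weighted log utilities this converges to the weighted proportionally fair split, e.g. weights `[1, 2, 3]` on a capacity-6 link yield rates near `[1, 2, 3]`. The paper's designs distribute this kind of computation across backbone, class, and (in the three-tier case) data-center levels rather than over a single link.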