Abstract

In today’s data center networks (DCNs), cloud applications commonly disseminate files from a single source to a group of receivers for service deployment, data replication, software upgrades, etc. For these group communication tasks, recent advances in software-defined networking (SDN) provide a bandwidth-efficient option: they enable DCNs to establish and control a large number of explicit multicast trees on demand. Yet the benefits of data center multicast remain severely limited, because no existing scheme prioritizes multicast transfers according to the performance metrics that today’s cloud applications care about, such as achieving small mean completion times or meeting soft deadlines with high probability. To this end, we propose PAM (Priority-based Adaptive Multicast), a preemptive, decentralized, and readily deployable rate control protocol for data center multicast. At its core, switches in PAM explicitly control the sending rates of concurrent multicast transfers based on their desired priorities and the available link bandwidth. With different policies for priority generation, PAM supports a range of scheduling goals. We not only prototype PAM on emerging P4-based programmable switches with novel approximation designs, but also evaluate its performance with extensive ns-3-based simulations. Results show that PAM is readily deployable; it converges quickly, has a negligible impact on coexisting TCP traffic, and consistently achieves near-optimal priority-based multicast scheduling.
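To make the core idea concrete, the sketch below illustrates the kind of explicit, priority-based rate allocation the abstract describes: a switch port divides its available bandwidth among the concurrent multicast transfers that traverse it, in priority order, so higher-priority transfers preempt lower-priority ones. The names, structure, and strict-priority policy here are illustrative assumptions, not the paper's actual algorithm or P4 implementation.

```python
# Minimal sketch (assumptions, not PAM's implementation) of strict-priority
# rate allocation on a single link.
from dataclasses import dataclass

@dataclass
class Transfer:
    name: str
    priority: int   # smaller value = higher priority (assumed convention)
    demand: float   # requested sending rate, Gbps

def allocate_link(transfers, capacity_gbps):
    """Grant each transfer up to its demand in priority order until the
    link capacity is exhausted; lower-priority transfers are preempted."""
    allocation = {}
    remaining = capacity_gbps
    for t in sorted(transfers, key=lambda t: t.priority):
        rate = min(t.demand, remaining)
        allocation[t.name] = rate
        remaining -= rate
    return allocation

# Example: two concurrent multicast transfers share a 10 Gbps link.
transfers = [Transfer("replication", priority=1, demand=8.0),
             Transfer("sw-upgrade", priority=2, demand=6.0)]
print(allocate_link(transfers, capacity_gbps=10.0))
# -> {'replication': 8.0, 'sw-upgrade': 2.0}
```

In a decentralized protocol of this kind, a sender would presumably be throttled to the smallest allocation it receives along its multicast tree, and different priority-generation policies (e.g., by remaining size or by deadline) would yield the different scheduling goals mentioned above.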
