Abstract

In data center networks (DCNs), long-lived TCP flows tend to bloat switch buffers. As a consequence, short-lived TCP incast traffic suffers repeated losses that often force loss recovery via timeout. Because the minimum retransmission timeout (minRTO) in most TCP implementations is fixed at around 200 ms, interactive applications, which often generate short-lived incast traffic, suffer unnecessarily long delays waiting for the timeout to expire. The most direct solution to this problem would be to tune minRTO to match DCN delays; however, this is not always possible, particularly in public data centers where multiple tenants running various versions of TCP coexist. In this paper, we propose to achieve the same result using techniques and technologies that are already available in most commodity switches and data centers and that do not interfere with the tenants' virtual machines or TCP stacks. Our approach relies on the programmable nature of SDN switches: we design an SDN-based incast congestion control (SICC) framework that uses an SDN network application in the controller and a shim layer in the host hypervisor to mitigate incast congestion. We demonstrate the performance gains of the proposed scheme via a real deployment on a small-scale testbed as well as ns-2 simulation experiments on networks of various sizes and settings.
