Abstract

Traffic policing is widely used by ISPs to limit their customers’ traffic rates. It has long been believed that a well-tuned traffic policer offers satisfactory performance for TCP. However, we find that this belief breaks with the emergence of new congestion control (CC) algorithms: flows using new CC algorithms can easily occupy the majority of the bandwidth, starving traditional TCP flows. We confirm this problem with experiments and reveal its root cause as follows. Because traffic policers have no buffer, congestion manifests only as packet losses, while new CC algorithms are loss-resilient. When policed, they do not reduce their sending rate until the loss ratio reaches a level unacceptable to TCP, resulting in low throughput for competing TCP flows. Simply adding a buffer to the traffic policer improves fairness but incurs high latency. To address this, we propose FairPolicer, which achieves fair bandwidth allocation without sacrificing latency. FairPolicer regards a token as the basic unit of bandwidth and fairly allocates tokens to active flows in a round-robin manner. To avoid bandwidth waste when flows come and go, FairPolicer puts all available tokens in a global bucket and maintains the amount of residual bucket space rather than the number of available tokens. To scale to massive numbers of concurrent flows, FairPolicer uses a Count-Min Sketch structure to maintain per-flow data with a small memory footprint. Testbed experiments show that FairPolicer can allocate bandwidth in a max-min fair manner and achieve much lower latency than other kinds of rate limiters.
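The two mechanisms named in the abstract, round-robin token allocation and Count-Min Sketch per-flow state, can be illustrated with a minimal sketch. The class and function names below (`CountMinSketch`, `allocate_round_robin`) are hypothetical illustrations, not the paper's implementation; the actual FairPolicer design operates on the global bucket's residual space rather than on a fixed token budget as shown here.

```python
import hashlib

class CountMinSketch:
    """Approximate per-flow counters in O(width * depth) memory.

    Estimates are biased upward (never undercounts), which is why a
    policer can use it safely for rate accounting."""

    def __init__(self, width=64, depth=4):
        self.width, self.depth = width, depth
        self.table = [[0] * width for _ in range(depth)]

    def _cells(self, key):
        # One independent hash per row, derived by salting the key.
        for row in range(self.depth):
            h = hashlib.blake2b(f"{row}:{key}".encode(), digest_size=8)
            yield row, int.from_bytes(h.digest(), "big") % self.width

    def add(self, key, amount=1):
        for row, col in self._cells(key):
            self.table[row][col] += amount

    def estimate(self, key):
        # Taking the minimum over rows limits the damage from collisions.
        return min(self.table[row][col] for row, col in self._cells(key))


def allocate_round_robin(flows, tokens):
    """Hand out tokens one at a time to active flows in round-robin order.

    With a shared token budget, this converges to a max-min fair split:
    no flow holds more than one token above any other."""
    grants = {flow: 0 for flow in flows}
    i = 0
    while tokens > 0 and flows:
        grants[flows[i % len(flows)]] += 1
        tokens -= 1
        i += 1
    return grants
```

For example, splitting 10 tokens among three flows yields grants of 4, 3, and 3, so no flow ever leads another by more than one token, which is the round-robin approximation of max-min fairness.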
