Abstract

Congestion control algorithms (CCAs) are a fundamental building block of the TCP protocol. As one of the newest CCAs, TCP BBR is designed to operate around Kleinrock's optimal point, i.e., maximum bandwidth and minimum delay, and is seeing increased adoption in today's Internet. However, BBR may send packets at a rate higher than the actual bandwidth due to bandwidth overestimation, especially in time-varying environments, resulting in large queueing delay. In this paper, we propose an adaptive BBR pacing algorithm, namely ABBR, for achieving high throughput and low delay simultaneously. ABBR uses deep reinforcement learning (DRL) to train a highly performant agent through trial and error to infer the packet sending rate. The design of ABBR is deeply rooted in BBR domain knowledge in terms of data collection and decision-making. ABBR is implemented in the Linux kernel and is backward compatible with vanilla BBR. Extensive experiments show that, compared with BBR, ABBR reduces delay by 40% with only about 3% throughput loss on average. ABBR achieves the best throughput-delay-overhead tradeoff among the state-of-the-art CCAs.
