The Internet has existed since the 1970s as a means of data exchange among devices in small networks. While early networks comprised only a handful of devices, today the ever-increasing number of connected devices leads to network congestion. Congestion control has therefore attracted considerable attention in both academia and industry over the past 30 years. Recently, Google developed BBR (Bottleneck Bandwidth and Round-Trip Time), a rate-based congestion control algorithm that sets the transmission rate based on the estimated delivery rate and round-trip time (RTT). However, such static congestion control algorithms (e.g., BBR) cannot sustain high performance across diverse network conditions (e.g., low-bandwidth links), because they cannot adapt to dynamic changes in the network environment. In this paper, we therefore propose an adaptive congestion control algorithm, called ABBR, for next-generation networks. ABBR employs reinforcement learning to learn policies that adjust the transmission rate of the underlying congestion control algorithm so as to optimize long-term performance. Experimental results show that our proposal achieves good performance in terms of throughput, RTT, and fairness compared to the benchmarks.
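The rate-based control that BBR performs can be sketched as follows. This is a minimal illustration of the general idea (pace at the windowed-max delivery rate, cap inflight data near the bandwidth-delay product), not the paper's or Google's implementation; the class name, filter window sizes, and method signatures are assumptions made for this sketch.

```python
from collections import deque

class BBRSketch:
    """Illustrative BBR-style rate control: estimate bottleneck bandwidth
    as the windowed max of recent delivery-rate samples, and the propagation
    delay as the windowed min of recent RTT samples (window sizes are
    arbitrary here, not BBR's actual filter lengths)."""

    def __init__(self):
        self.bw_samples = deque(maxlen=10)   # recent delivery-rate samples (bytes/s)
        self.rtt_samples = deque(maxlen=10)  # recent RTT samples (s)

    def on_ack(self, delivered_bytes: float, interval_s: float, rtt_s: float):
        # Delivery rate = data ACKed over the sampling interval.
        self.bw_samples.append(delivered_bytes / interval_s)
        self.rtt_samples.append(rtt_s)

    def pacing_rate(self, gain: float = 1.0) -> float:
        # Pace at the bottleneck-bandwidth estimate (max recent delivery rate),
        # scaled by a gain factor (BBR cycles this gain to probe for bandwidth).
        return gain * max(self.bw_samples)

    def bdp(self) -> float:
        # Bandwidth-delay product: roughly how much data the pipe can hold,
        # used to cap the amount of inflight data.
        return max(self.bw_samples) * min(self.rtt_samples)
```

For example, after an ACK sample of 125,000 bytes delivered over 0.1 s with a 50 ms RTT, the sketch estimates a 1.25 MB/s bottleneck and a 62,500-byte BDP; a learning-based scheme like the one the paper proposes would adapt quantities such as the pacing gain to the observed network state rather than fixing them in advance.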