Abstract

Recent advances in deep reinforcement learning (RL) have opened new avenues for improving network congestion control algorithms. Our research builds on these developments, focusing on the BBR (Bottleneck Bandwidth and Round-trip propagation time) congestion control algorithm. We propose integrating GENET's reinforcement learning framework, a training paradigm that has demonstrated success across network adaptation tasks including adaptive video streaming, congestion control, and load balancing. GENET uses curriculum learning to train RL models effectively by progressively introducing more challenging network environments. This counters two common pitfalls in RL training: mediocre performance when a model is trained over a very broad range of environments, and poor generalization when it is trained on narrowly defined scenarios. Our approach exploits GENET's ability to identify and emphasize network conditions in which the current RL model underperforms traditional rule-based baselines, so that training concentrates on the environments with the most room for improvement. This work aims to show that applying GENET's methodology to BBR can yield RL policies that surpass both conventionally trained RL policies and rule-based baselines, advancing the efficiency and reliability of network congestion control.
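To make the curriculum-learning idea concrete, the sketch below illustrates (in Python) how GENET-style environment selection could work: candidate network environments are scored by how far the current RL policy falls behind a rule-based baseline such as BBR, and the environments with the largest shortfall are promoted for the next round of training. This is a minimal illustration only, not the authors' implementation; the environment parameterization, reward functions, and function names are hypothetical placeholders.

```python
# Hypothetical sketch of GENET-style curriculum selection for congestion control.
# Environments where the current RL policy trails a rule-based baseline (e.g., BBR)
# are prioritized for further training.
import random


def sample_environment():
    """Draw a placeholder network environment: (bandwidth Mbps, RTT ms, loss rate)."""
    return (random.uniform(1, 100), random.uniform(10, 200), random.uniform(0.0, 0.05))


def baseline_reward(env):
    """Placeholder reward of a rule-based baseline (standing in for BBR) in env."""
    bw, rtt, loss = env
    return bw * (1.0 - loss) - 0.01 * rtt


def rl_policy_reward(env, policy_strength=0.8):
    """Placeholder reward of the current RL policy in env."""
    bw, rtt, loss = env
    return policy_strength * bw * (1.0 - loss) - 0.01 * rtt


def select_curriculum(num_candidates=100, top_k=10):
    """Pick the environments where the RL policy underperforms the baseline the most."""
    candidates = [sample_environment() for _ in range(num_candidates)]
    gaps = [(baseline_reward(e) - rl_policy_reward(e), e) for e in candidates]
    gaps.sort(key=lambda pair: pair[0], reverse=True)  # largest shortfall first
    return [env for gap, env in gaps[:top_k] if gap > 0]


if __name__ == "__main__":
    hard_envs = select_curriculum()
    print(f"Promoting {len(hard_envs)} environments for the next training round")
```

In the actual GENET framework, the rewards would come from running the RL policy and the baseline in a network simulator or emulator; the sketch only captures the selection rule of training where the gap to the rule-based baseline is largest.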
