Abstract

With the growing impact of the Internet, computer network communication has become an essential component of many industries. Congestion Control (CC) algorithms serve as the backbone of network communication and significantly affect network quality. However, designing a CC algorithm that performs optimally across diverse network environments remains a substantial challenge, and redesigning and optimizing CC algorithms for specific network environments demands both expert experience and considerable engineering effort. In this paper, we propose an inverse reinforcement learning (IRL) algorithm that uses expert data to guide the self-optimization of a CC model in a specific network environment. To improve training efficiency, we further propose a parallel training framework and a visualization analysis tool that enable distributed training and real-time analysis at the control level. In our experiments, we evaluate 16 algorithms across three network scenarios using Pantheon. Our IRL model achieves the best network performance in the satellite network scenario, improving throughput by 10%–23%. For delay performance, it ranks second in the wired network scenario and achieves a 21%–67% improvement over traditional TCP algorithms in the regular network scenario.
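The abstract only summarizes the approach at a high level. As a rough illustration of the general idea of using expert data to guide a CC policy via inverse reinforcement learning, the sketch below implements a toy feature-expectation-matching loop. All state features, the environment dynamics, and every name in it are illustrative assumptions; this is not the paper's actual algorithm, and it does not use Pantheon.

```python
# Minimal, hypothetical sketch: apprenticeship-learning-style IRL for a toy CC policy.
# Everything here (features, environment, update rule) is an assumption for illustration.
import numpy as np

rng = np.random.default_rng(0)

def features(state):
    """Toy CC state features: normalized throughput, RTT inflation, loss rate."""
    return np.asarray(state, dtype=float)

def toy_env_step(state, action):
    """Crude stand-in for a network simulator (assumption, not Pantheon)."""
    tput, rtt, loss = state
    tput = np.clip(tput + 0.05 * action, 0.0, 1.0)
    rtt = np.clip(rtt + 0.03 * action, 0.0, 1.0)
    loss = np.clip(loss + (0.02 if tput > 0.9 else -0.01), 0.0, 1.0)
    return np.array([tput, rtt, loss])

def rollout(policy_weights, env_step, horizon=50, gamma=0.95):
    """Return discounted feature expectations of a simple linear policy."""
    state = np.array([0.5, 0.5, 0.0])
    feat_sum = np.zeros(3)
    for t in range(horizon):
        # Action: grow or shrink the congestion window based on a linear score.
        action = 1 if policy_weights @ features(state) > 0 else -1
        state = env_step(state, action)
        feat_sum += (gamma ** t) * features(state)
    return feat_sum

# Expert feature expectations would be estimated from expert CC traces;
# here they are synthetic placeholder values.
mu_expert = np.array([9.0, 3.0, 0.2])

w_policy = rng.normal(size=3)
for _ in range(20):
    mu_policy = rollout(w_policy, toy_env_step)
    reward_weights = mu_expert - mu_policy   # IRL step: infer a reward direction
    w_policy += 0.1 * reward_weights         # greatly simplified policy-improvement step

print("learned policy weights:", w_policy)
```

In a real system the inner policy-improvement step would be a full RL procedure (and the environment a network emulator such as Pantheon), but the loop structure, inferring a reward signal from the gap between expert and learner behavior and then improving the policy against it, is the core idea the abstract refers to.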
