Abstract

In this paper, we propose a novel transmission control protocol (TCP) congestion control method from a cross-layer perspective and present a deep reinforcement learning (DRL)-driven method called DRL-3R (DRL for congestion control with Radio access network information and Reward Redistribution) to learn the TCP congestion control policy more effectively. In particular, we incorporate radio access network (RAN) information to grasp the dynamics of the RAN in a timely manner, and enable DRL to learn from delayed RAN feedback that may be induced by several consecutive actions. Meanwhile, we relax an implicit assumption made in previous research (that the feedback to one specific action returns one round-trip time (RTT) after the action is applied) by redistributing rewards so that the merits of individual actions are evaluated more accurately. Experimental results show that, besides maintaining reasonable fairness, DRL-3R significantly outperforms classical congestion control methods (e.g., TCP Reno, Westwood, Cubic, and BBR) as well as DRL-CC on network utility, achieving higher throughput while reducing delay in various network environments.
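The abstract's reward-redistribution idea (crediting a delayed reward back to the several consecutive actions that jointly produced it, rather than to a single action one RTT earlier) can be illustrated with a minimal sketch. The function name, the fixed delay window, and the uniform-split rule below are our own simplifying assumptions for illustration, not the paper's actual DRL-3R algorithm.

```python
def redistribute_rewards(rewards, delay):
    """Split each observed reward evenly among the `delay` most recent
    actions, on the assumption that they jointly produced it.

    This is a hypothetical, uniform-credit illustration of reward
    redistribution; DRL-3R's actual scheme may weight actions differently.
    """
    credited = [0.0] * len(rewards)
    for t, reward in enumerate(rewards):
        # Actions at steps t-delay+1 .. t share the reward observed at step t.
        window = range(max(0, t - delay + 1), t + 1)
        share = reward / len(window)
        for i in window:
            credited[i] += share
    return credited
```

For example, a reward of 3.0 observed at step 2 with a delay window of 3 is credited equally (1.0 each) to the actions taken at steps 0, 1, and 2, while the total credited reward always equals the total observed reward.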
