Abstract

In this paper, we study networking problems from a new perspective by leveraging emerging deep learning techniques to develop an experience-driven approach, which enables a network or a protocol to learn the best way to control itself from its own experience (e.g., runtime statistics), just as a human learns a skill. We present the design, implementation, and evaluation of a deep reinforcement learning (DRL)-based control framework, DRL-CC (DRL for Congestion Control), which realizes our experience-driven design philosophy for multi-path TCP (MPTCP) congestion control. DRL-CC utilizes a single agent (instead of multiple independent agents) to dynamically and jointly perform congestion control for all active MPTCP flows on an end host, with the objective of maximizing the overall utility. The novelty of our design lies in utilizing a flexible recurrent neural network, an LSTM, within a DRL framework to learn a representation of all active flows and deal with their dynamics. Moreover, we integrate, for the first time, this LSTM-based representation network into an actor-critic framework for continuous (congestion) control, which leverages the deterministic policy gradient method to train the critic, actor, and LSTM networks in an end-to-end manner. We implemented DRL-CC based on the MPTCP implementation in the Linux kernel. The experimental results show that 1) DRL-CC consistently and significantly outperforms several well-known MPTCP congestion control algorithms in terms of goodput without sacrificing fairness; 2) it is flexible and robust in highly dynamic network environments with time-varying flows; and 3) it is friendly to regular TCP.
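To make the described architecture concrete, below is a minimal PyTorch sketch of the idea: an LSTM folds the states of a variable number of active flows into one fixed-size representation, which feeds a deterministic actor and a Q-value critic so that the policy gradient can flow back through all three networks end-to-end. All class names, layer sizes, and the per-flow state dimension are illustrative assumptions for exposition, not the paper's actual implementation.

    import torch
    import torch.nn as nn

    class FlowRepresentation(nn.Module):
        """LSTM that folds a variable number of per-flow state vectors
        into a single fixed-size representation (hypothetical sketch)."""
        def __init__(self, flow_state_dim, hidden_dim):
            super().__init__()
            self.lstm = nn.LSTM(flow_state_dim, hidden_dim, batch_first=True)

        def forward(self, flow_states):
            # flow_states: (batch, num_flows, flow_state_dim); the flow
            # dimension plays the role of the LSTM "time" axis, so any
            # number of active flows maps to one hidden_dim vector.
            _, (h_n, _) = self.lstm(flow_states)
            return h_n[-1]                          # (batch, hidden_dim)

    class Actor(nn.Module):
        """Deterministic policy: maps the representation to a bounded
        continuous congestion-control action (e.g., a cwnd adjustment)."""
        def __init__(self, hidden_dim, action_dim):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(hidden_dim, 64), nn.ReLU(),
                nn.Linear(64, action_dim), nn.Tanh())

        def forward(self, rep):
            return self.net(rep)

    class Critic(nn.Module):
        """Estimates the action value Q(s, a) for a representation/action pair."""
        def __init__(self, hidden_dim, action_dim):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(hidden_dim + action_dim, 64), nn.ReLU(),
                nn.Linear(64, 1))

        def forward(self, rep, action):
            return self.net(torch.cat([rep, action], dim=-1))

    # Deterministic policy gradient: the critic's gradient w.r.t. the
    # action flows back through the actor and the shared LSTM encoder.
    encoder = FlowRepresentation(flow_state_dim=5, hidden_dim=32)
    actor, critic = Actor(32, 1), Critic(32, 1)
    states = torch.randn(8, 3, 5)                   # 8 samples, 3 active flows
    rep = encoder(states)
    actor_loss = -critic(rep, actor(rep)).mean()
    actor_loss.backward()   # gradients reach the actor and the LSTM; in a
                            # full DDPG-style loop the critic is updated
                            # separately from a TD-error loss.

In a complete training loop, only the actor and encoder optimizers would step on this loss, while the critic would be fitted against bootstrapped targets from a replay buffer, following the usual deterministic-policy-gradient recipe.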
