Abstract

Experience-driven networking has emerged as a new and highly effective approach for resource allocation in complex communication networks, and Deep Reinforcement Learning (DRL) has been shown to be a useful technique for enabling it. In this paper, we focus on a practical and fundamental problem for experience-driven networking: when network configurations change, how to train a new DRL agent to adapt effectively and quickly to the new environment. We present an Actor-Critic-based Transfer learning framework for the Traffic Engineering (TE) problem using policy distillation, which we call ACT-TE. ACT-TE effectively and quickly trains a new DRL agent to solve the TE problem in a new network environment, using both old knowledge (i.e., distilled from the existing agent) and new experience (i.e., newly collected samples). We implement ACT-TE in ns-3 and compare it with commonly used baselines through packet-level simulations on three representative network topologies: NSFNET, ARPANET and a random topology. The extensive simulation results show that 1) existing well-trained DRL agents do not work well in new network environments; and 2) ACT-TE significantly outperforms both straightforward methods (training from scratch and fine-tuning an existing DRL agent) and several widely used traditional methods in terms of network utility, throughput and delay.
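The abstract describes combining old knowledge (policy distillation from an existing agent) with new experience (actor-critic updates on freshly collected samples). The sketch below illustrates that general idea only; it is not the paper's implementation, and the function name, the distillation weight `beta`, and the temperature `tau` are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def distilled_actor_loss(new_logits, old_logits, actions, advantages,
                         beta=0.5, tau=1.0):
    """Hypothetical combined objective: an actor-critic policy-gradient term
    on new experience plus a policy-distillation term that pulls the new
    (student) policy toward the existing (teacher) policy."""
    # Policy-gradient term from newly collected samples
    log_probs = F.log_softmax(new_logits, dim=-1)
    chosen = log_probs.gather(1, actions.unsqueeze(1)).squeeze(1)
    pg_loss = -(chosen * advantages.detach()).mean()

    # Distillation term: KL divergence from the (temperature-softened)
    # teacher policy to the student policy
    teacher = F.softmax(old_logits.detach() / tau, dim=-1)
    student_log = F.log_softmax(new_logits / tau, dim=-1)
    distill_loss = F.kl_div(student_log, teacher,
                            reduction="batchmean") * (tau ** 2)

    return pg_loss + beta * distill_loss
```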
