Abstract

The development of machine learning provides a new paradigm for network optimization; for example, reinforcement learning (RL) has brought great improvements to many fields, such as adaptive video streaming and TCP congestion control. The fundamental mechanism of such RL-based architectures is that the neural-network decision model converges to a stable state by continuously interacting with the network environment. However, for the network routing problem, such RL-based strategies do not work well under topology changes. This is because a topological change requires the existing RL model to be retrained, and during the slow reconvergence of retraining the model may stop making routing decisions or provide non-optimal ones, seriously affecting transmission performance. To solve this problem, we propose a fast-convergent RL model (SOHO-FL), which alleviates the performance degradation caused by slow retraining through federated learning. Experimental results based on real-world network topologies demonstrate that SOHO-FL outperforms state-of-the-art algorithms in reconvergence time by 22.3% on average.
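The abstract attributes the faster reconvergence to federated learning, in which participants train locally and periodically merge their models instead of each retraining from scratch. The sketch below shows the standard federated-averaging (FedAvg-style) aggregation step for illustration only; the function name `fed_avg` and the flat weight representation are assumptions, not the paper's actual SOHO-FL algorithm.

```python
# Illustrative FedAvg-style aggregation (an assumption, not the paper's
# actual SOHO-FL method): each client's weights are averaged into a
# global model, weighted by the client's local sample count.

def fed_avg(client_weights, client_sizes):
    """Aggregate per-client model weights (lists of floats) into a
    global model, weighting each client by its number of local samples."""
    total = sum(client_sizes)
    n_params = len(client_weights[0])
    return [
        sum(w[i] * s for w, s in zip(client_weights, client_sizes)) / total
        for i in range(n_params)
    ]

# Example: two nodes share locally trained weights after a topology
# change; the aggregate gives every node a warm start for retraining.
global_w = fed_avg([[1.0, 2.0], [3.0, 4.0]], [10, 30])
print(global_w)  # -> [2.5, 3.5]
```

Because each node resumes from the aggregated model rather than a random initialization, the retraining phase that the abstract identifies as the bottleneck can reconverge sooner.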
