Abstract

The novelty of software-defined networking (SDN) lies in separating the control plane from the data plane for easier manipulation of the network. A distributed control plane is designed to provide greater computation capacity and to address the single-point-of-failure problem. However, it also poses a new challenge: how to arrange switch-controller associations effectively. Direct static configuration cannot adapt well to time-varying requests from switches, which results in load imbalance on the control plane and causes long-tail latency. Thus, it is necessary to adjust the switch-to-controller association dynamically. Existing controller-based load-balancing methods need to communicate with the switches frequently and incur not only heavy consumption of the scarce SDN control channel but also high computation costs. In this paper, we provide a switch-based solution that puts Reinforcement Learning agents on all Switches (RLoS). Instead of following static rules predefined by operators, RLoS lets each switch actively select the best controller. RLoS treats every switch as an independent agent with its own neural network and parameters. With a carefully designed training algorithm, the agents can choose their preferred controllers using only local information. The results show that even with partial observation, RLoS still achieves considerable improvement in load balance across all controllers compared with controller-based association benchmarks. RLoS decreases the maximum response latency among controllers by about 5%∼15% on average under different scenarios.

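To illustrate the idea of treating each switch as an independent agent with its own parameters that selects a controller from local information only, the sketch below shows a minimal per-switch policy agent. The observation definition (locally measured controller response latencies), the linear softmax policy, the REINFORCE-style update, and all names such as SwitchAgent are illustrative assumptions, not the paper's actual RLoS architecture or training algorithm.

```python
# Minimal sketch, assuming each switch runs its own small policy over
# locally observed controller latencies. Not the paper's RLoS algorithm.
import numpy as np

class SwitchAgent:
    def __init__(self, num_controllers, obs_dim, lr=0.01, seed=0):
        rng = np.random.default_rng(seed)
        # Each switch keeps its own parameters (a linear softmax policy here).
        self.W = rng.normal(scale=0.1, size=(num_controllers, obs_dim))
        self.lr = lr
        self.num_controllers = num_controllers

    def _policy(self, obs):
        logits = self.W @ obs
        logits -= logits.max()                 # numerical stability
        p = np.exp(logits)
        return p / p.sum()

    def select_controller(self, obs):
        """Sample a controller index from the switch's local policy."""
        probs = self._policy(obs)
        return np.random.choice(self.num_controllers, p=probs)

    def update(self, obs, action, reward):
        """One REINFORCE step: reinforce choices that yielded low latency."""
        probs = self._policy(obs)
        grad_logits = -probs
        grad_logits[action] += 1.0             # d log pi(a|obs) / d logits
        self.W += self.lr * reward * np.outer(grad_logits, obs)

# Toy usage: reward is the negative observed response latency, so the
# agent drifts toward controllers that respond faster (i.e., less loaded).
agent = SwitchAgent(num_controllers=3, obs_dim=3)
for step in range(1000):
    latencies = np.array([5.0, 2.0, 8.0]) + np.random.rand(3)  # local estimate
    obs = latencies / latencies.sum()
    a = agent.select_controller(obs)
    agent.update(obs, a, reward=-latencies[a])
```

Because each agent trains on its own partial view, no central coordinator or frequent controller-to-switch polling is required in this sketch, which is the property the abstract attributes to the switch-based approach.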