Abstract

Controller placement is a critical problem in Software-Defined Networking (SDN), a paradigm that enables more flexible network control and management. Deep Reinforcement Learning is a promising approach to this problem because of its ability to explore large solution spaces and adapt to rapidly changing network conditions. Furthermore, the success of reinforcement learning algorithms in games such as chess and Go has inspired treating the controller placement problem as a Markov game, in which intelligent agents are trained to solve the problem autonomously. This paper presents an intelligent system for optimizing controller placement in SDN. The MuZero reinforcement learning algorithm, which achieves superhuman performance in complex domains by combining tree search with a learned model, is used to train a model via self-play. Once trained, the model is integrated with an SDN controller so that it can find an optimal placement in a real network, with performance metrics including latency, load, and communication overhead incorporated into the training process of the intelligent agents. Experimental results are provided as a benchmark to show that our approach is feasible and efficient in practice.
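
To make the placement-as-a-game formulation concrete, the sketch below frames controller placement as a sequential decision problem whose terminal reward combines the three metrics named above (latency, load, and communication overhead). This is a minimal illustrative sketch, not the paper's implementation: the environment class, metric definitions, and weights are assumptions chosen for clarity.

```python
# Hypothetical sketch: controller placement as a sequential decision problem.
# All names, metric definitions, and weights are illustrative assumptions.
import numpy as np

class PlacementEnv:
    """Place k controllers on an n-node topology, one node per step."""

    def __init__(self, dist_matrix, k, weights=(0.5, 0.3, 0.2)):
        self.dist = np.asarray(dist_matrix, dtype=float)  # pairwise latencies
        self.n = self.dist.shape[0]
        self.k = k
        self.w_latency, self.w_load, self.w_overhead = weights
        self.placed = []

    def legal_actions(self):
        # A node can host at most one controller.
        return [v for v in range(self.n) if v not in self.placed]

    def step(self, node):
        self.placed.append(node)
        done = len(self.placed) == self.k
        reward = -self._cost() if done else 0.0  # sparse terminal reward
        return self.placed.copy(), reward, done

    def _cost(self):
        ctrl = np.array(self.placed)
        # Latency: mean switch-to-nearest-controller delay.
        nearest = self.dist[:, ctrl].argmin(axis=1)
        latency = self.dist[np.arange(self.n), ctrl[nearest]].mean()
        # Load imbalance: spread of switches assigned per controller.
        load = np.bincount(nearest, minlength=self.k).std()
        # Overhead: mean inter-controller synchronization distance.
        overhead = self.dist[np.ix_(ctrl, ctrl)].mean()
        return (self.w_latency * latency
                + self.w_load * load
                + self.w_overhead * overhead)

# Example: 4-node line topology, place 2 controllers.
d = np.array([[0, 1, 2, 3], [1, 0, 1, 2], [2, 1, 0, 1], [3, 2, 1, 0]])
env = PlacementEnv(d, k=2)
env.step(1)
print(env.step(2))  # ([1, 2], terminal reward, True)
```

In an actual MuZero setup, such an environment would supply the action space and terminal rewards, while the learned dynamics model and Monte Carlo tree search choose which node to place next.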
