Abstract
The control plane plays a significant role in Software-Defined Networking (SDN). A large SDN usually implements its control plane with several distributed controllers, each controlling a subset of switches and synchronizing with the other controllers to maintain a consistent network view. Under fluctuating network traffic, a static controller-switch mapping can lead to imbalanced workload allocation. Controllers may become overloaded and reject new requests, eventually reducing the control plane's request processing ability. Most existing schemes rely heavily on iterative optimization algorithms to adjust the mapping between controllers and switches, and these are either time-consuming or deliver unsatisfactory performance. In this paper, we propose a dynamic controller workload balancing scheme, termed MARVEL, which uses multi-agent reinforcement learning to generate switch migration actions. MARVEL works in two phases: offline training and online decision making. In the training phase, each agent learns how to migrate switches by interacting with the network. In the online phase, MARVEL is deployed to make switch migration decisions. Experimental results show that MARVEL outperforms existing schemes, improving the control plane's request processing ability by at least 27.3% while using 25% less processing time.
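The abstract names the two-phase workflow (offline multi-agent training, online switch-migration decisions) but does not specify MARVEL's state, action, or reward design. The following Python sketch is only a minimal, assumed illustration of such a workflow using one tabular Q-learning agent per controller; the bucketized load state, the HOLD/MIGRATE action pair, the load-spread reward, and the toy traffic simulator are all hypothetical choices for illustration, not MARVEL's actual method.

```python
import random
from collections import defaultdict

# Assumed action space: keep the current mapping, or migrate load to the
# least-loaded controller. This is an illustrative simplification.
ACTIONS = ["HOLD", "MIGRATE"]


class ControllerAgent:
    """One RL agent per controller; learns when to shed a switch (sketch)."""

    def __init__(self, alpha=0.1, gamma=0.9, eps=0.2):
        self.q = defaultdict(lambda: {a: 0.0 for a in ACTIONS})
        self.alpha, self.gamma, self.eps = alpha, gamma, eps

    def state(self, load, capacity):
        # Discretize utilization into coarse buckets (assumed representation).
        return min(int(10 * load / capacity), 10)

    def act(self, s, explore=True):
        if explore and random.random() < self.eps:
            return random.choice(ACTIONS)
        return max(self.q[s], key=self.q[s].get)

    def learn(self, s, a, reward, s_next):
        # Standard Q-learning update.
        best_next = max(self.q[s_next].values())
        self.q[s][a] += self.alpha * (reward + self.gamma * best_next - self.q[s][a])


def simulate_step(loads):
    """Toy traffic fluctuation: each controller's load drifts randomly."""
    return [max(0.0, l + random.uniform(-5, 5)) for l in loads]


# Offline training phase (toy environment: 3 controllers, capacity 100 each).
agents = [ControllerAgent() for _ in range(3)]
loads = [60.0, 40.0, 20.0]
for _ in range(5000):
    loads = simulate_step(loads)
    for i, agent in enumerate(agents):
        s = agent.state(loads[i], 100)
        a = agent.act(s)
        if a == "MIGRATE":
            # Shift a unit of load to the currently least-loaded controller.
            j = min(range(len(loads)), key=lambda k: loads[k])
            shift = min(10.0, loads[i])
            loads[i] -= shift
            loads[j] += shift
        # Assumed reward: penalize load imbalance across controllers.
        reward = -(max(loads) - min(loads))
        agent.learn(s, a, reward, agent.state(loads[i], 100))

# Online decision-making phase: greedy actions, no exploration.
for i, agent in enumerate(agents):
    s = agent.state(loads[i], 100)
    print(f"controller {i}: load={loads[i]:.1f}, action={agent.act(s, explore=False)}")
```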