Abstract

Software-defined networking creates new opportunities for automated network security management by providing a global network view and a standard interface for configuring network policies. Previously, we proposed a general framework, called ATMoS, for autonomous threat mitigation using reinforcement learning (RL) in software-defined networks. Using a suitable set of host simulations and based on observations from an arbitrary network monitoring infrastructure, ATMoS can autonomously mitigate threats by moving hosts between a set of virtual networks that embody different network policies. In this article, we propose ATMoS+, which extends the RL agent in ATMoS with a novel Deep Q-Network architecture. The deep RL agent in ATMoS+ leverages permutation-invariant and permutation-equivariant set functions to relax previous assumptions on the number of network hosts and their ordering. We show that the proposed deep RL agent scales with the number of hosts, generalizes to networks of arbitrary size without additional retraining, and accommodates several different types of threat alerts.
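To illustrate the kind of set function the abstract refers to, the sketch below shows a minimal permutation-equivariant Q-network in the Deep Sets style: each host embedding is updated from its own features plus a pooled summary of all hosts, and a shared head produces per-host action values. This is a hypothetical illustration under assumed names and dimensions (EquivariantLayer, SetQNetwork, obs_dim, n_vnets), not the actual ATMoS+ architecture described in the paper.

```python
import torch
import torch.nn as nn


class EquivariantLayer(nn.Module):
    """Illustrative permutation-equivariant layer (Deep Sets style):
    permuting the hosts permutes the outputs in the same way."""

    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.local = nn.Linear(in_dim, out_dim)            # per-host transform
        self.pooled = nn.Linear(in_dim, out_dim, bias=False)  # pooled-summary transform

    def forward(self, x):                                  # x: (batch, n_hosts, in_dim)
        summary = x.mean(dim=1, keepdim=True)              # (batch, 1, in_dim)
        return torch.relu(self.local(x) + self.pooled(summary))


class SetQNetwork(nn.Module):
    """Toy Q-network over a variable-sized set of hosts: the same
    weights apply to any number of hosts and any ordering of them,
    producing one Q-value per host per candidate virtual network."""

    def __init__(self, obs_dim, n_vnets, hidden=64):
        super().__init__()
        self.body = nn.Sequential(
            EquivariantLayer(obs_dim, hidden),
            EquivariantLayer(hidden, hidden),
        )
        self.head = nn.Linear(hidden, n_vnets)             # per-host action values

    def forward(self, obs):                                # obs: (batch, n_hosts, obs_dim)
        return self.head(self.body(obs))                   # (batch, n_hosts, n_vnets)


# The same network scores a 5-host and a 50-host network without retraining.
q = SetQNetwork(obs_dim=8, n_vnets=3)
print(q(torch.randn(1, 5, 8)).shape)   # torch.Size([1, 5, 3])
print(q(torch.randn(1, 50, 8)).shape)  # torch.Size([1, 50, 3])
```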
