Abstract

Software-defined networking creates new opportunities for automated network security management by providing a global network view and a standard interface for configuring network policies. Previously, we proposed a general framework, called ATMoS, for autonomous threat mitigation using reinforcement learning (RL) in software-defined networks. Using a suitable set of host simulations, and based on observations from an arbitrary network monitoring infrastructure, ATMoS autonomously mitigates threats by moving hosts between a set of virtual networks that embody different network policies. In this article, we propose ATMoS+, which extends the RL agent in ATMoS with a novel Deep Q-Network architecture. The deep RL agent in ATMoS+ leverages permutation-invariant and permutation-equivariant set functions to relax earlier assumptions on the number of network hosts and their ordering. We show that the proposed deep RL agent generalizes to networks of arbitrary size without additional retraining, scales with the number of hosts, and accommodates several different types of threat alerts.
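To illustrate the idea of permutation-invariant and permutation-equivariant set functions in a Q-network, the sketch below shows a Deep-Sets-style model in which each host is encoded independently, a sum-pooled (order-invariant) network summary is computed, and per-host Q-values are produced equivariantly. This is an illustrative assumption, not the authors' exact ATMoS+ architecture; all layer widths and feature sizes are hypothetical.

```python
# Hedged sketch (not the authors' exact architecture): a Deep-Sets-style
# Q-network. Per-host Q-values are permutation-equivariant and the pooled
# network context is permutation-invariant, so the same weights apply to
# any number of hosts without retraining.
import torch
import torch.nn as nn

class SetQNetwork(nn.Module):
    def __init__(self, obs_dim: int, n_actions: int, hidden: int = 64):
        super().__init__()
        # phi: encodes each host's observation independently (equivariant part)
        self.phi = nn.Sequential(nn.Linear(obs_dim, hidden), nn.ReLU())
        # rho: processes the pooled, order-invariant network summary
        self.rho = nn.Sequential(nn.Linear(hidden, hidden), nn.ReLU())
        # head: maps [host encoding || network summary] to per-action Q-values
        self.head = nn.Linear(2 * hidden, n_actions)

    def forward(self, obs: torch.Tensor) -> torch.Tensor:
        # obs: (batch, n_hosts, obs_dim); n_hosts may differ between calls
        h = self.phi(obs)                               # (batch, n_hosts, hidden)
        pooled = self.rho(h.sum(dim=1))                 # (batch, hidden), invariant to host order
        context = pooled.unsqueeze(1).expand_as(h)      # broadcast summary to every host
        return self.head(torch.cat([h, context], dim=-1))  # (batch, n_hosts, n_actions)

# Example: the same weights score actions for 5 hosts or 50 hosts.
q_net = SetQNetwork(obs_dim=8, n_actions=3)
print(q_net(torch.randn(1, 5, 8)).shape)   # torch.Size([1, 5, 3])
print(q_net(torch.randn(1, 50, 8)).shape)  # torch.Size([1, 50, 3])
```

Because the pooling operation is a symmetric sum, reordering the hosts permutes the per-host Q-values in the same way while leaving the network summary unchanged, which is what allows a single trained agent to generalize across networks of different sizes.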
