Abstract
The incorporation of advanced networking technologies makes modern systems vulnerable to cyber-attacks that can result in a number of harmful outcomes. Due to the increase in security incidents and the massive volume of network activity, existing works have mainly focused on designing Intrusion Detection Systems (IDSs) based on traditional machine learning and deep learning models. In recent years, state-of-the-art performance has been achieved in various fields through Deep Reinforcement Learning (DRL), which combines deep learning with reinforcement learning. In this paper, we propose a new DRL-based IDS for network traffic that uses a Markov Decision Process (MDP) to improve the IDS decision-making performance. In addition, an extensive analysis of the IDS behavior is provided by modeling the interaction between the well-behaving IDS and attacker players using stochastic game theory. Specifically, we use a non-zero-sum stochastic game in which the transitions between states depend on both the IDS's and the attacker's actions at each stage of the game. We show that our game converges to a Nash equilibrium, which corresponds to the optimal decision policy where both players maximize their payoffs. We compare the performance of our proposed DRL-IDS against a baseline of standard reinforcement learning (RL) and several machine learning algorithms on the NSL-KDD dataset. Our proposed DRL-IDS outperforms the existing models, improving both the detection rate and the accuracy while reducing false alarms. Results are also provided to demonstrate the convergence of the game-theory-based IDS toward equilibrium under various settings. This equilibrium corresponds to the safe state in which both players are playing their respective best strategies.
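For context, the two components named above admit standard textbook formulations; the sketch below is a generic one (the symbols Q, V_D, V_A, pi_D, pi_A are illustrative and not necessarily the exact notation used in the paper). In the MDP setting, the DRL agent approximates the optimal action-value function satisfying the Bellman optimality equation, and in the non-zero-sum stochastic game a strategy pair is a Nash equilibrium when neither the IDS (defender) nor the attacker can improve its expected payoff by deviating unilaterally:

% Bellman optimality equation for the IDS decision MDP (generic form)
Q^{*}(s,a) = \mathbb{E}\left[\, r(s,a) + \gamma \max_{a'} Q^{*}(s',a') \,\middle|\, s,a \,\right]

% Nash equilibrium condition for the two-player non-zero-sum stochastic game,
% with defender (IDS) strategy \pi_D and attacker strategy \pi_A
V_D(\pi_D^{*}, \pi_A^{*}) \ge V_D(\pi_D, \pi_A^{*}) \quad \forall \pi_D,
\qquad
V_A(\pi_D^{*}, \pi_A^{*}) \ge V_A(\pi_D^{*}, \pi_A) \quad \forall \pi_A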