Abstract
Network Intrusion Detection Systems (IDSs) have achieved high accuracy through the wide adoption of Machine Learning (ML) models. However, most current ML-based IDSs cannot cope with targeted adversarial attacks because they are commonly trained and tested on fixed datasets. In this paper, we propose an Adversarial Intrusion Detection Training Framework (AIDTF) to improve the robustness of IDSs. AIDTF consists of an attacker model (a-model), a defender model (d-model), and a black-box trainer (t-module). Both the a-model and the d-model are multilayer perceptrons, and the t-module is the component used to train IDSs. Unlike traditional training methods, AIDTF improves IDS accuracy through adversarial training: taking the distribution of normal samples in the dataset as the target distribution to be learned, the a-model aims to generate samples that deceive the d-model, while the d-model aims to determine whether its input samples are real, so there is an adversarial relationship between the two models. The t-module can train different types of IDSs on the samples generated by this confrontation; we call such a system an Adversarial Training Intrusion Detection System (ATIDS). The main contribution of this paper is a training method that yields an IDS with high accuracy not only on known test sets but also against unknown disguised attack samples. We evaluated different types of ATIDSs against current mainstream attack methods, including the Fast Gradient Method (FGM), the Fast Gradient Sign Method (FGSM), Projected Gradient Descent (PGD), and the Jacobian-based Saliency Map Attack (JSMA). The experimental results show that AIDTF outperforms other adversarial training methods, achieving not only higher accuracy on the test set but also up to a 99% recognition rate on attack samples.
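The adversarial relationship between the a-model and the d-model can be sketched as a minimal GAN-style training loop. Everything concrete here is an assumption for illustration only: a toy 2-D Gaussian stands in for the distribution of normal traffic samples, a linear map stands in for the a-model, and a logistic-regression classifier stands in for the defender MLP; the paper's actual architectures, features, and hyperparameters are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for the distribution of normal samples in an IDS dataset
# (an assumption of this sketch, not the paper's data).
def sample_real(n):
    return rng.normal(loc=[2.0, -1.0], scale=0.3, size=(n, 2))

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

# a-model: maps noise z to a candidate sample (linear map for brevity).
Wg = rng.normal(0.0, 0.1, (2, 2)); bg = np.zeros(2)
# d-model: logistic regression standing in for the defender MLP.
Wd = rng.normal(0.0, 0.1, 2); bd = 0.0

lr = 0.05
for step in range(3000):
    z = rng.normal(size=(64, 2))
    fake = z @ Wg + bg              # samples produced by the a-model
    real = sample_real(64)

    # d-model step: push D(real) toward 1 and D(fake) toward 0.
    pr = sigmoid(real @ Wd + bd)
    pf = sigmoid(fake @ Wd + bd)
    grad_Wd = -((1 - pr)[:, None] * real).mean(0) + (pf[:, None] * fake).mean(0)
    grad_bd = -(1 - pr).mean() + pf.mean()
    Wd -= lr * grad_Wd
    bd -= lr * grad_bd

    # a-model step: push D(fake) toward 1, i.e. deceive the defender.
    pf = sigmoid(fake @ Wd + bd)
    dfake = -(1 - pf)[:, None] * Wd    # gradient of the a-model loss w.r.t. fake
    Wg -= lr * (z.T @ dfake) / len(z)
    bg -= lr * dfake.mean(0)

# After training, the a-model's output mean (bg) should have drifted
# toward the mean of the "normal" distribution.
print(np.round(bg, 2))
```

In AIDTF, the samples produced during this confrontation are then handed to the t-module, which uses them (rather than a fixed dataset) to train the downstream IDS.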