Abstract

Intrusion Detection Systems (IDS) are increasingly adopting machine learning (ML)-based approaches to detect threats in computer networks due to their ability to learn underlying threat patterns/features. However, ML-based models are susceptible to adversarial attacks, in which slight perturbations of the input features cause misclassifications. We propose a method that uses active learning and generative adversarial networks to evaluate the threat of adversarial attacks on ML-based IDS. Existing adversarial attack methods require a large amount of training data or assume knowledge of the IDS model itself (e.g., its loss function), which may not be possible in real-world settings. Our method overcomes these limitations by demonstrating the ability to compromise an IDS using limited training data and no prior knowledge of the IDS model other than its binary classification output (i.e., benign or malicious). Experimental results demonstrate the ability of our proposed model to achieve a 98.86% success rate in bypassing the IDS model using only 25 labeled data points during model training. The knowledge gained by compromising the ML-based IDS can be integrated into the IDS in order to enhance its robustness against similar ML-based adversarial attacks.
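The black-box setting described above can be illustrated with a minimal sketch. The code below is a hypothetical toy, not the authors' method: a simple linear substitute model stands in for the paper's GAN, and a random linear rule stands in for the IDS. It shows the core query-efficient idea, though: spend a small label budget (25 queries, matching the paper's figure) learning a local approximation of the IDS decision boundary, then perturb a malicious sample along that approximation until the real IDS flips its label to benign.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical black-box IDS: the attacker sees only a binary label
# (0 = benign, 1 = malicious), never the weights _w or any loss/gradient.
_w = rng.normal(size=8)

def ids_label(x):
    return int(x @ _w > 0.0)

# Query phase: spend a small label budget asking the IDS about random
# feature vectors and recording its binary answers.
budget = 25
X = rng.normal(size=(budget, 8))
y = np.array([ids_label(x) for x in X])

# Substitute model: a least-squares fit mapping features to +/-1 labels.
# This is a crude stand-in for the generative model trained in the paper.
w_sub, *_ = np.linalg.lstsq(X, 2.0 * y - 1.0, rcond=None)

def evade(x, step=0.2, max_iter=500):
    """Perturb a malicious sample against the substitute's direction,
    checking the real (black-box) IDS after each step."""
    x = x.copy()
    direction = w_sub / np.linalg.norm(w_sub)
    for _ in range(max_iter):
        if ids_label(x) == 0:
            return x  # evasion succeeded
        x -= step * direction
    return x

# Find a sample the IDS currently flags as malicious, then evade it.
while True:
    malicious = rng.normal(size=8)
    if ids_label(malicious) == 1:
        break

adversarial = evade(malicious)
```

Because the substitute is trained only on the 25 query results, the attacker never needs the IDS internals; the same loop structure applies when the substitute is replaced by a stronger learned generator, as in the paper.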
