Abstract

Adversarial machine learning is a recent area of study that explores both adversarial attack strategies and defenses against adversarial attacks, where adversarial inputs are specially crafted to mislead a detection system's classification or to disrupt its training process. In this research, we implemented two adversarial attack scenarios, using a Generative Adversarial Network (GAN) to generate synthetic intrusion traffic and test the influence of these attacks on the accuracy of machine learning-based Intrusion Detection Systems (IDSs). We conducted two experiments, covering poisoning and evasion attacks, on two different types of machine learning models: Decision Tree and Logistic Regression. The implemented attack scenarios were evaluated on the CICIDS2017 dataset by comparing the accuracy of the machine learning-based IDS before and after each attack. The results show that the proposed evasion attacks reduced the testing accuracy of both network intrusion detection system (NIDS) models, demonstrating that our evasion attack scenario negatively affected the accuracy of machine learning-based NIDSs, with the Decision Tree model affected more than Logistic Regression. Furthermore, our poisoning attack scenario disrupted the training process of the machine learning-based NIDS, with the Logistic Regression model affected more than the Decision Tree.
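
The abstract does not spell out the attack implementations, so the following is a minimal sketch of the before-and-after evaluation protocol only. It assumes scikit-learn, uses a synthetic stand-in for the CICIDS2017 features, and substitutes a simple label-flipping poisoning attack for the paper's GAN-based traffic generation; all names and parameters here are illustrative, not the authors' code.

```python
import numpy as np
from sklearn.base import clone
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# Synthetic stand-in for CICIDS2017 flow features (binary: benign vs. attack).
X, y = make_classification(n_samples=5000, n_features=20, n_informative=10,
                           random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3,
                                                    random_state=0)

def poison_labels(labels, fraction=0.2, seed=0):
    """Illustrative poisoning attack: flip a fraction of training labels.

    A simple stand-in for the paper's GAN-generated poisoning traffic.
    """
    rng = np.random.default_rng(seed)
    poisoned = labels.copy()
    idx = rng.choice(len(labels), size=int(fraction * len(labels)),
                     replace=False)
    poisoned[idx] = 1 - poisoned[idx]
    return poisoned

y_train_poisoned = poison_labels(y_train)

# The two model types evaluated in the paper.
models = {
    "Decision Tree": DecisionTreeClassifier(random_state=0),
    "Logistic Regression": LogisticRegression(max_iter=1000),
}

# Compare testing accuracy before and after the (illustrative) attack.
for name, model in models.items():
    acc_clean = accuracy_score(
        y_test, clone(model).fit(X_train, y_train).predict(X_test))
    acc_poisoned = accuracy_score(
        y_test, clone(model).fit(X_train, y_train_poisoned).predict(X_test))
    print(f"{name}: accuracy before attack = {acc_clean:.3f}, "
          f"after poisoning = {acc_poisoned:.3f}")
```

The drop from `acc_clean` to `acc_poisoned` mirrors the paper's evaluation metric: the accuracy difference of each machine learning-based IDS before and after an attack.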
