Abstract

Adversarial attacks aim to deceive a target system. Recently, deep learning methods have themselves become targets of adversarial attacks: even small perturbations can cause classification errors in deep learning models. In an intrusion detection system based on deep learning, an adversarial attack can induce classification errors, so that malicious traffic is classified as benign. In this study, the effects of adversarial attacks on the accuracy of deep learning-based intrusion detection systems were examined. The CICIDS2017 dataset was used to test the detection systems. First, DDoS attacks were detected using Autoencoder, MLP, AEMLP, DNN, AEDNN, CNN, and AECNN methods. Then, the Fast Gradient Sign Method (FGSM) was used to perform adversarial attacks. Finally, the sensitivity of the methods to adversarial attacks was examined. Our results show that the classification performance of deep learning-based detection methods decreased by up to 17% after the adversarial attacks. The results obtained in this study form a basis for verification and validation studies of learning-based intrusion detection systems.
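As a minimal sketch of the attack the abstract refers to: FGSM perturbs an input in the direction of the sign of the loss gradient, x_adv = x + ε · sign(∇x J(θ, x, y)). The example below is illustrative only, not the paper's implementation; it uses a toy logistic-regression "model" whose input gradient can be written in closed form, so no deep learning framework is needed.

```python
import numpy as np

def fgsm_perturb(x, w, b, y, epsilon):
    """Fast Gradient Sign Method (FGSM) on a logistic-regression model.

    For binary cross-entropy J with p = sigmoid(w.x + b), the gradient
    of the loss w.r.t. the input is dJ/dx = (p - y) * w, so the
    adversarial example is x_adv = x + epsilon * sign(dJ/dx).
    """
    p = 1.0 / (1.0 + np.exp(-(np.dot(w, x) + b)))  # model's predicted probability
    grad = (p - y) * w                             # input gradient of the loss
    return x + epsilon * np.sign(grad)

# Toy example (hypothetical weights, not from the paper):
# a point the model scores strongly as class 0 ("benign").
w = np.array([1.0, -2.0])
b = 0.0
x = np.array([-1.0, 1.0])   # w.x + b = -3, p ~ 0.047
x_adv = fgsm_perturb(x, w, b, y=0.0, epsilon=0.5)
# The perturbation follows sign(grad) = sign(p * w) = (+, -),
# giving x_adv = [-0.5, 0.5] and w.x_adv + b = -1.5, so the
# model's score moves toward the decision boundary.
```

In an intrusion detection setting, the same one-step perturbation applied to traffic features of a malicious flow is what can push a classifier toward a "benign" verdict, which is the degradation the study measures.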
