Abstract

Machine learning-based intrusion detection systems (IDS) are essential security functions in conventional and software-defined networks alike. Their success, and the security of the networks they protect, depends on the accuracy of their classification results. Adversarial attacks against machine learning, which seriously threaten any IDS, are still not countered effectively. In this study, we first develop a method that employs generative adversarial networks to produce adversarial attack data. We then propose RAIDS, a robust IDS model designed to be resilient against adversarial attacks. In RAIDS, an autoencoder's reconstruction error is used as a prediction value that is fed to a classifier. In addition, to prevent an attacker from guessing the feature set in use, multiple feature sets are created and used to train baseline machine learning classifiers. A LightGBM classifier is then trained on the outputs of two autoencoders and an ensemble of the baseline machine learning classifiers. The results show that, against adversarial attacks, the proposed robust model increases overall accuracy by at least 13.2% and F1-score by more than 110% without the need for adversarial training.
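
To make the stacked design concrete, the sketch below illustrates the idea in Python: per-sample autoencoder reconstruction errors and the outputs of baseline classifiers trained on different feature subsets are combined into input features for a LightGBM meta-classifier. Everything here is an illustrative assumption rather than the authors' configuration: the toy dataset, the MLPRegressor networks standing in for the two autoencoders (assumed here to model benign and attack traffic respectively), the choice of feature subsets, and all hyperparameters.

```python
# Illustrative sketch of the stacking idea: reconstruction errors plus
# baseline classifier outputs feed a LightGBM meta-classifier.
# All models, data, and hyperparameters are assumptions for illustration.
import numpy as np
import lightgbm as lgb
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor  # stand-in for an autoencoder

# Toy data standing in for labeled network-flow features (0 = benign, 1 = attack).
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

def reconstruction_error(ae, X):
    # Per-sample mean squared reconstruction error.
    return np.mean((ae.predict(X) - X) ** 2, axis=1)

# Two "autoencoders": MLPs trained to reproduce their input, one per class
# (an assumption; the abstract only says two autoencoders are used).
ae_benign = MLPRegressor(hidden_layer_sizes=(8,), max_iter=500, random_state=0)
ae_benign.fit(X_train[y_train == 0], X_train[y_train == 0])
ae_attack = MLPRegressor(hidden_layer_sizes=(8,), max_iter=500, random_state=1)
ae_attack.fit(X_train[y_train == 1], X_train[y_train == 1])

# Baseline classifiers, each trained on a different feature subset so that
# an attacker cannot easily guess which features are in use.
subsets = [np.arange(0, 10), np.arange(10, 20)]
baselines = [RandomForestClassifier(random_state=0).fit(X_train[:, s], y_train)
             for s in subsets]

def meta_features(X):
    # Stack reconstruction errors and baseline attack probabilities.
    cols = [reconstruction_error(ae_benign, X),
            reconstruction_error(ae_attack, X)]
    cols += [clf.predict_proba(X[:, s])[:, 1]
             for clf, s in zip(baselines, subsets)]
    return np.column_stack(cols)

# LightGBM meta-classifier trained on the stacked outputs.
meta = lgb.LGBMClassifier(random_state=0)
meta.fit(meta_features(X_train), y_train)
print("meta-classifier accuracy:", meta.score(meta_features(X_test), y_test))
```

The design intuition suggested by the abstract is that an adversarial example crafted to fool one baseline classifier must also keep its reconstruction errors plausible and evade the classifiers trained on other feature subsets, which raises the cost of a successful evasion.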
