Abstract
The ever-evolving cybersecurity environment has given rise to sophisticated adversaries who constantly explore new ways to attack cyberinfrastructure. Recently, the use of deep learning-based intrusion detection systems has been on the rise, owing to the capacity of deep neural networks (DNNs) to make anomaly detection more accurate. However, the complexity of these models renders them black boxes, as they lack explainability and interpretability. Not only are DNNs perceived as black-box models, but recent research has also shown that they are vulnerable to adversarial attacks. This paper develops an adversarially robust and explainable deep learning-based network intrusion detection system by applying adversarial training and explainable AI techniques. In our experiments on the NSL-KDD dataset, the PGD adversarial-trained model was more robust than the DeepFool and FGSM adversarial-trained models, achieving a ROC-AUC of 0.87. The FGSM attack did not affect the PGD adversarial-trained model's ROC-AUC, while the DeepFool attack caused only a 9.20% reduction. The PGD attack caused a 15.12% reduction in the DeepFool adversarial-trained model's ROC-AUC and a 12.79% reduction in the FGSM adversarial-trained model's ROC-AUC.
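To make the core technique concrete, the following is a minimal sketch of PGD adversarial training, the approach the abstract reports as most robust. The network architecture, hyperparameters (epsilon, step size, iteration count), and preprocessing are illustrative assumptions, not details taken from the paper; the toy input merely mirrors the 41-feature layout of NSL-KDD records.

```python
# Minimal PGD adversarial-training sketch (PyTorch).
# All hyperparameters and the model architecture are illustrative
# assumptions, not the paper's actual configuration.
import torch
import torch.nn as nn


def pgd_attack(model, x, y, loss_fn, eps=0.1, alpha=0.01, steps=10):
    """Projected gradient ascent on the loss within an L-infinity ball of radius eps."""
    x_adv = x.detach() + torch.empty_like(x).uniform_(-eps, eps)  # random start
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = loss_fn(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv.detach() + alpha * grad.sign()                # ascent step
        x_adv = x.detach() + torch.clamp(x_adv - x, -eps, eps)      # project back into the ball
    return x_adv.detach()


def adversarial_train_step(model, optimizer, loss_fn, x, y):
    """One training step on PGD-perturbed inputs (Madry-style adversarial training)."""
    model.eval()   # fix batch-norm/dropout behavior while crafting the attack
    x_adv = pgd_attack(model, x, y, loss_fn)
    model.train()
    optimizer.zero_grad()
    loss = loss_fn(model(x_adv), y)
    loss.backward()
    optimizer.step()
    return loss.item()


if __name__ == "__main__":
    # Toy stand-in: 41 features per record (as in NSL-KDD), binary normal/attack labels.
    model = nn.Sequential(nn.Linear(41, 64), nn.ReLU(), nn.Linear(64, 2))
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.CrossEntropyLoss()
    x = torch.randn(32, 41)
    y = torch.randint(0, 2, (32,))
    print(adversarial_train_step(model, optimizer, loss_fn, x, y))
```

Training exclusively on PGD-perturbed inputs in this way is what makes the model robust to weaker single-step attacks such as FGSM, consistent with the robustness ordering reported above.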