Abstract

Machine learning (ML) is fundamentally changing our way of life with the recent availability of high computational power and big data. Emerging ML‐based network intrusion detection systems (NIDS) can detect complex cyberattacks that are undetectable by conventional techniques. In this chapter, we evaluate the threat of a generative adversarial network (GAN)‐aided attack on these systems. In our threat model, an adversarial attacker, given access to the NIDS's training data, adds a minimal perturbation to the feature values of attack traffic to flip the prediction of the detector's deep neural network (DNN) from “malicious” to “benign.” We evaluate our attack algorithm against two state‐of‐the‐art DNN models as well as our own well‐trained DNN model, achieving nearly 100% success rates in the whitebox setting. We also show that adversarial traffic crafted on these three DNN models transfers to, and fools at a high rate, NIDS models trained with classic ML algorithms: logistic regression, support vector machine, decision tree, and k‐nearest neighbors. Our work shows that ML‐based NIDS are vulnerable to adversarial network traffic crafted by our GAN‐based attack algorithm.
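To illustrate the core idea of the threat model — minimally perturbing the feature values of attack traffic until a classifier's prediction flips from "malicious" to "benign" — the following is a minimal sketch. It does not reproduce the chapter's GAN‐based algorithm; instead it substitutes a toy logistic‐regression "NIDS" and a simple gradient‐sign perturbation (FGSM‐style), with hypothetical random weights standing in for a trained model:

```python
import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(size=5)  # hypothetical learned weights over 5 flow features


def malicious_prob(x, w):
    """Probability that the toy NIDS labels feature vector x 'malicious'."""
    return 1.0 / (1.0 + np.exp(-x @ w))


x = 0.8 * w                     # a flow the toy model flags as malicious
assert malicious_prob(x, w) > 0.5

eps = 0.02                      # small per-step perturbation budget
steps = 0
while malicious_prob(x, w) >= 0.5 and steps < 1000:
    # nudge each feature a small step against the sign of the gradient,
    # lowering the "malicious" score while keeping the perturbation small
    x = x - eps * np.sign(w)
    steps += 1

# after the loop, the perturbed traffic is classified as "benign"
```

In the chapter's setting, a GAN generator learns to produce such perturbations directly, and the resulting adversarial traffic also transfers to classifiers (logistic regression, SVM, decision tree, k‐nearest neighbors) that the attacker never queried.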
