Abstract

Deep learning has become one of the most powerful and efficient machine learning frameworks, with applications across a wide range of domains. In particular, advances in modern deep learning approaches have proven effective for building next-generation smart intrusion detection systems (IDSs). However, deep learning-based systems remain vulnerable to adversarial examples, which can undermine the robustness of the models. Poisoning attacks are a family of adversarial attacks against machine learning models in which an adversary injects a small proportion of malicious samples into the training dataset to degrade the performance of the victim's model. The robustness of deep learning-based IDSs has therefore become a critical concern. In this work, we investigate poisoning attacks against deep learning-based network intrusion detection systems. We describe the general attack strategy and perform experiments on multiple datasets, including CTU13-08, CTU13-09, CTU13-10, and CTU13-13. Experimental results show that even a small number of injected samples drastically reduces the performance of deep learning-based IDSs.
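To make the attack strategy concrete, the following is a minimal sketch of label-flipping data poisoning, one common form of poisoning attack. It is illustrative only: the synthetic data, the MLP classifier, and the `poison_rate` values are assumptions for demonstration, not the paper's actual datasets, model, or injection rates.

```python
# Minimal sketch of a label-flipping poisoning attack (illustrative;
# data, model, and poison rates are assumptions, not the paper's setup).
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

# Placeholder traffic features X and binary labels y (0 = benign, 1 = botnet).
X = rng.normal(size=(5000, 20))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

def poison_labels(labels, poison_rate, rng):
    """Flip the labels of a randomly chosen fraction of training samples."""
    poisoned = labels.copy()
    n_poison = int(poison_rate * len(labels))
    idx = rng.choice(len(labels), size=n_poison, replace=False)
    poisoned[idx] = 1 - poisoned[idx]  # flip 0 <-> 1
    return poisoned

# Train the victim model on increasingly poisoned training sets and
# measure how test accuracy degrades as the poison rate grows.
for poison_rate in (0.0, 0.05, 0.10, 0.20):
    y_poisoned = poison_labels(y_train, poison_rate, rng)
    clf = MLPClassifier(hidden_layer_sizes=(64,), max_iter=300, random_state=0)
    clf.fit(X_train, y_poisoned)
    acc = accuracy_score(y_test, clf.predict(X_test))
    print(f"poison rate {poison_rate:.0%}: test accuracy {acc:.3f}")
```

The key point the sketch captures is that the adversary controls only a small fraction of the training labels, yet the victim's clean-test performance can drop noticeably as that fraction grows.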
