Abstract

Machine learning (ML) is increasingly used for malicious traffic detection and has proven effective. However, ML-based detectors are at risk of being deceived by adversarial examples, so carrying out adversarial attacks is critical for evaluating their robustness. Several studies have examined adversarial attacks on ML-based detectors, but most operate under unrealistic conditions in two respects: (i) the attacks assume extra prior knowledge about the target model, such as its training dataset and features, which is unlikely to be available in practice; and (ii) the attacks generate impractical examples, i.e., traffic features or traffic that does not comply with communication protocol rules. In this paper, we propose GPMT, an adversarial attack framework that generates practical adversarial malicious traffic to deceive ML-based detectors. Compared with previous work, our approach has two main advantages: (i) little prior knowledge: we restrict the attacker's prior knowledge to simulate black-box attacks in realistic settings; and (ii) more adversarial and practical examples: we employ a Wasserstein GAN (WGAN) to carry out the attack and design a novel loss function, producing practical adversarial examples that are more likely to evade detection. We attack nine ML-based models on the CTU-13 dataset to demonstrate the framework's validity. Experimental results show that GPMT is more effective and versatile than other methods: across the nine models, the mean evasion increase rate (EIR) reaches 65.53%, which is 16.48% higher than DIGFuPAS, the best of the related methods. In addition, experiments on other datasets confirm that the attack generalizes well.
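The abstract does not specify GPMT's architecture or its novel loss function, so the following is only a minimal, hypothetical sketch of how a WGAN-based attack of this general kind could be set up: a generator perturbs malicious flow features, a critic trained with the standard WGAN objective pushes the perturbed features toward the benign distribution, and an extra penalty term, standing in for the paper's unspecified practicality constraint, discourages perturbations that would violate feature or protocol validity. All names, dimensions, weights, and the penalty itself are assumptions for illustration (PyTorch).

```python
import torch
import torch.nn as nn

FEATURE_DIM = 40        # hypothetical flow-feature dimensionality
NOISE_DIM = 16          # hypothetical noise dimensionality
LAMBDA_PRACTICAL = 1.0  # hypothetical weight for the practicality penalty

# Generator perturbs a malicious flow-feature vector using random noise.
generator = nn.Sequential(
    nn.Linear(FEATURE_DIM + NOISE_DIM, 128), nn.ReLU(),
    nn.Linear(128, FEATURE_DIM),
)

# WGAN critic scores how benign-looking a feature vector is (no sigmoid).
critic = nn.Sequential(
    nn.Linear(FEATURE_DIM, 128), nn.ReLU(),
    nn.Linear(128, 1),
)

g_opt = torch.optim.RMSprop(generator.parameters(), lr=5e-5)
c_opt = torch.optim.RMSprop(critic.parameters(), lr=5e-5)

def practicality_penalty(original, perturbed):
    """Hypothetical stand-in for the paper's practicality term: keep the
    perturbed features close to the original, protocol-valid ones."""
    return (perturbed - original).abs().mean()

def train_step(malicious_batch, benign_batch):
    # --- Critic update: maximize score gap between benign and adversarial ---
    noise = torch.randn(malicious_batch.size(0), NOISE_DIM)
    adversarial = generator(torch.cat([malicious_batch, noise], dim=1)).detach()
    c_loss = -(critic(benign_batch).mean() - critic(adversarial).mean())
    c_opt.zero_grad(); c_loss.backward(); c_opt.step()
    for p in critic.parameters():        # WGAN weight clipping
        p.data.clamp_(-0.01, 0.01)

    # --- Generator update: look benign to the critic while staying practical ---
    noise = torch.randn(malicious_batch.size(0), NOISE_DIM)
    adversarial = generator(torch.cat([malicious_batch, noise], dim=1))
    g_loss = (-critic(adversarial).mean()
              + LAMBDA_PRACTICAL * practicality_penalty(malicious_batch, adversarial))
    g_opt.zero_grad(); g_loss.backward(); g_opt.step()
    return c_loss.item(), g_loss.item()

# Example usage with random stand-in data (real inputs would be extracted flow features).
mal = torch.randn(32, FEATURE_DIM)
ben = torch.randn(32, FEATURE_DIM)
print(train_step(mal, ben))
```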
