Abstract

Adversarial attacks represent a critical issue that prevents the reliable integration of machine learning methods into cyber defense systems. Past work has shown that even proficient detectors can be evaded through small perturbations of malicious samples, and that existing countermeasures are immature. We address this problem by presenting AppCon, an original approach to harden intrusion detectors against adversarial evasion attacks. Our proposal integrates ensemble learning into realistic network environments by combining layers of detectors, each devoted to monitoring the behavior of an application employed by the organization. We validate our proposal through extensive experiments performed in heterogeneous network settings simulating botnet detection scenarios, considering detectors based on distinct machine- and deep-learning algorithms. The results demonstrate that AppCon mitigates the threat of adversarial attacks in over 75% of the considered evasion attempts, while avoiding the limitations of existing countermeasures, such as performance degradation in non-adversarial settings. For these reasons, our proposal represents a valuable contribution to the development of more secure cyber defense platforms.
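
The core mechanism can be outlined with a minimal sketch, assuming a hypothetical `AppConEnsemble` class, a per-application routing rule, and Random Forest base learners; none of these are confirmed details of the authors' implementation. Each detector is trained only on the traffic of one application employed by the organization, and incoming flows are dispatched to the detector matching their application:

```python
# Minimal sketch of a per-application ensemble in the spirit of AppCon.
# The class name, routing rule, and default base learner are illustrative
# assumptions, not the authors' implementation.
from sklearn.ensemble import RandomForestClassifier

class AppConEnsemble:
    def __init__(self, base_factory=lambda: RandomForestClassifier(n_estimators=100)):
        self.base_factory = base_factory
        self.detectors = {}  # one detector per monitored application

    def fit(self, flows, labels, apps):
        # Train each detector only on flows of its own application, so an
        # evasive sample must also remain consistent with that application's
        # normal behavior to slip through.
        for app in set(apps):
            idx = [i for i, a in enumerate(apps) if a == app]
            clf = self.base_factory()
            clf.fit([flows[i] for i in idx], [labels[i] for i in idx])
            self.detectors[app] = clf

    def predict(self, flow, app):
        # Flows of applications not employed by the organization are
        # flagged as suspicious by default.
        if app not in self.detectors:
            return 1
        return int(self.detectors[app].predict([flow])[0])
```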

Highlights

  • Adversarial attacks represent a dangerous menace for real-world implementations of machine learning (ML) algorithms [1,2,3,4]

  • We propose AppCon, an original approach focused on mitigating the impact of adversarial evasion attacks against ML-based network intrusion detection systems (NIDS), while preserving detection performance in the absence of adversarial attacks

  • Among the dozens of existing supervised algorithms, this paper focuses on classifiers that have proven effective for network intrusion detection [1,3,28]: Decision Tree (DT), Random Forest (RF), AdaBoost (AB), and Multi-Layer Perceptron (MLP); we also consider a fifth method based on deep learning and proposed by Google, Wide and Deep (WnD); a minimal instantiation sketch follows this list
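
As a point of reference, the four classical classifiers can be instantiated with scikit-learn as follows; the hyperparameters shown are illustrative defaults, not the settings used in the paper:

```python
# Illustrative instantiation of the four classical detectors named above;
# hyperparameters are scikit-learn defaults, not the paper's settings.
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier, AdaBoostClassifier
from sklearn.neural_network import MLPClassifier

detectors = {
    "DT": DecisionTreeClassifier(),
    "RF": RandomForestClassifier(n_estimators=100),
    "AB": AdaBoostClassifier(n_estimators=50),
    "MLP": MLPClassifier(hidden_layer_sizes=(64, 32)),
}
# Wide and Deep (WnD) is a deep-learning architecture with no scikit-learn
# counterpart; it would typically be built with TensorFlow/Keras.
```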

Introduction

Adversarial attacks represent a dangerous menace for real-world implementations of machine learning (ML) algorithms [1,2,3,4]. This threat involves crafting specific samples that induce a machine learning model to generate an output that is beneficial to the attacker. The literature identifies two categories of adversarial attacks [2]: those occurring at training time (known as poisoning attacks [5]) and those occurring at test time (often referred to as evasion attacks [6]). Adversarial machine learning has been thoroughly studied in the computer vision literature [7,8,9], and multiple works exist in the areas of malware, phishing, and spam detection [6,12,13,14,15,16,17]; in contrast, the field of network intrusion detection remains poorly investigated [10,11].
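
To make the evasion setting concrete, below is a minimal sketch assuming an illustrative flow-feature layout ([duration, packets, bytes]) and a toy greedy perturbation rule, neither of which is taken from the paper: a malicious sample's features are nudged until a trained detector misclassifies it as benign.

```python
# Illustrative test-time (evasion) attack on a flow-based detector.
# The feature layout [duration, packets, bytes] and the perturbation rule
# are assumptions for demonstration only.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
benign = rng.normal([10, 20, 3000], [2, 5, 500], size=(500, 3))
malicious = rng.normal([2, 4, 400], [0.5, 1, 80], size=(500, 3))
X = np.vstack([benign, malicious])
y = np.array([0] * 500 + [1] * 500)

clf = RandomForestClassifier(n_estimators=100).fit(X, y)

sample = malicious[0].copy()
# Greedily inflate duration/packets/bytes in small steps until the
# detector flips its decision: a toy analogue of an evasion attack.
for _ in range(50):
    if clf.predict([sample])[0] == 0:
        break
    sample += np.array([0.5, 1.0, 150.0])
print("evaded:", clf.predict([sample])[0] == 0)
```

Unlike image perturbations, a real network attacker must also keep the perturbed flow functional, which is part of what makes the network intrusion detection domain distinctive.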
