Abstract
Machine learning algorithms are widely used in cybersecurity, yet recent studies show that they are vulnerable to adversarial examples, which poses new threats to security-critical applications. Studies of adversarial examples in the cybersecurity domain remain scarce. In this paper, we propose a new method, known as the brute-force attack method, to better evaluate the robustness of machine learning classifiers in cybersecurity against adversarial examples. The proposed method works in a black-box manner, addresses several shortcomings of existing adversarial attack methods based on generative adversarial networks (GANs), is simple to implement, and needs only the output of the target classifier to generate adversarial examples. To evaluate the attack performance of the proposed method comprehensively, we use it to generate adversarial examples against common machine learning based security systems, including host intrusion detection systems, Android malware detection systems, and network intrusion detection systems, and we compare its attack performance against these systems with that of state-of-the-art GAN-based adversarial attack methods. Preliminary experimental results show that the proposed method is more computationally efficient than the state-of-the-art GAN-based attack methods, outperforms them, and can be used to evaluate the robustness of various machine learning based systems in cybersecurity against adversarial examples.
Highlights
Most scenarios in cybersecurity, such as malware detection [1] and intrusion detection [2], can be viewed as classification problems
To avoid the tedious training required by generative adversarial network (GAN)-based adversarial attack methods and to generate adversarial examples more efficiently, we propose a new and simple black-box attack method known as the brute-force attack method (BFAM) to better evaluate the robustness of machine learning based systems in cybersecurity against adversarial examples (AEs)
Our method achieves attack performance comparable to the GAN-based attack method (AAM-GAN) on logistic regression (LR)-based, naive Bayes (NB)-based, multilayer perceptron (MLP)-based, and random forest (RF)-based host intrusion detection systems (HIDSs)
Summary
Most scenarios in cybersecurity, such as malware detection [1] and intrusion detection [2], can be viewed as classification problems. To avoid the tedious training required by GAN-based adversarial attack methods and to generate adversarial examples more efficiently, we propose a new and simple black-box attack method known as the brute-force attack method (BFAM) to better evaluate the robustness of machine learning based systems in cybersecurity against AEs. Our method is simpler to implement than the GAN-based methods. We design three experiments covering different cybersecurity scenarios, i.e., host intrusion detection [16], network intrusion detection [2], and Android malware detection [17], where machine learning algorithms are widely used to improve the detection performance of the target systems.
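The summary does not give implementation details of BFAM. As a hedged illustration of the black-box setting it describes (only the target classifier's predicted label is queried, with no access to gradients or model internals), the sketch below randomly perturbs an input until the predicted label flips, using a toy scikit-learn logistic regression as a stand-in detector. The function name `brute_force_attack`, the random-search strategy, and all parameters are assumptions for illustration, not the paper's actual algorithm.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Toy "detector": class 1 = malicious, class 0 = benign.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))
y = (X[:, 0] + X[:, 1] > 0).astype(int)
clf = LogisticRegression().fit(X, y)

def brute_force_attack(classifier, x, max_trials=10_000, eps=1.0, seed=0):
    """Randomly perturb x until the classifier's predicted label flips.

    Black-box: only classifier.predict (the model's output) is queried.
    Returns an adversarial example, or None if no flip is found.
    """
    search_rng = np.random.default_rng(seed)
    original_label = classifier.predict(x.reshape(1, -1))[0]
    for _ in range(max_trials):
        candidate = x + search_rng.uniform(-eps, eps, size=x.shape)
        if classifier.predict(candidate.reshape(1, -1))[0] != original_label:
            return candidate  # adversarial example found
    return None

# Attack the sample closest to the decision boundary.
idx = np.argmin(np.abs(clf.decision_function(X)))
x0 = X[idx]
adv = brute_force_attack(clf, x0)
```

In practice a real attack against malware or intrusion detectors would also need to constrain the perturbation so the modified sample remains valid and functional; this sketch only demonstrates the query-only search loop.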