Abstract

Adversarial Malware Example (AME)-based adversarial training can effectively enhance the robustness of Machine Learning (ML)-based malware detectors against AMEs. AME quality is a key factor in this robustness enhancement. Generative Adversarial Networks (GANs) are one family of AME generation methods, but existing GAN-based approaches suffer from inadequate optimization, mode collapse, and training instability. In this paper, we propose a novel approach (denoted LSGAN-AT) to enhance the robustness of ML-based malware detectors against adversarial examples; it consists of an LSGAN module and an AT module. The LSGAN module generates more effective and smoother AMEs by using a redesigned network structure and a Least Squares (LS) loss to optimize boundary samples. The AT module performs adversarial training with the AMEs generated by LSGAN to produce an ML-based Robust Malware Detector (RMD). Extensive experimental results validate the transferability of the AMEs in attacking six ML detectors and the transferability of the RMD in resisting the MalGAN black-box attack. The results also verify the performance of the generated RMD in terms of its recognition rate on AMEs.
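This summary does not restate the LS loss itself; for orientation, here is a minimal sketch of the standard least-squares GAN objective (Mao et al. 2017), on which an LSGAN module of this kind presumably builds, with the common label choice a = 0, b = c = 1:

```latex
% Least-squares GAN objective (Mao et al. 2017); a, b, c are the target
% labels for fake data, real data, and generator-desired outputs.
\min_D \; \tfrac{1}{2}\,\mathbb{E}_{x \sim p_{\mathrm{data}}}\big[(D(x)-b)^2\big]
       + \tfrac{1}{2}\,\mathbb{E}_{z \sim p_z}\big[(D(G(z))-a)^2\big]
\qquad
\min_G \; \tfrac{1}{2}\,\mathbb{E}_{z \sim p_z}\big[(D(G(z))-c)^2\big]
```

Unlike the cross-entropy loss, the quadratic penalty still moves generated samples that lie far from the decision boundary on the correct side, which is the usual explanation for LSGAN's smoother gradients and more stable training.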

Highlights

  • Malware is regarded as a severe threat to cybersecurity (Zhou and Jiang 2012; Christodorescu et al. 2005)

  • Five metrics are used to evaluate the performance of the model and detectors: ACCuracy (ACC), Adversarial example Effectiveness Rate (AER), RECognition rate (REC), True Positive Rate (TPR), and False Positive Rate (FPR); the standard definitions of ACC, TPR, and FPR are sketched after this list

  • In the LSGAN module, we deploy a well-designed union detector to fit a Multi-Layer Perceptron (MLP), which was selected through several experiments
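ACC, TPR, and FPR follow their standard confusion-matrix definitions, reproduced below; AER and REC are paper-specific and defined in the full text (plausibly the fraction of AMEs that evade a detector and the fraction a detector correctly recognizes, respectively, but that reading is an assumption here):

```latex
% Standard confusion-matrix metrics
% (TP/TN/FP/FN = true/true-negative/false-positive/false-negative counts)
\mathrm{ACC} = \frac{TP + TN}{TP + TN + FP + FN}, \qquad
\mathrm{TPR} = \frac{TP}{TP + FN}, \qquad
\mathrm{FPR} = \frac{FP}{FP + TN}
```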


Introduction

Malware is regarded as a severe threat to cybersecurity (Zhou and Jiang 2012; Christodorescu et al. 2005). Machine Learning (ML) detectors have been explored for malware detection and have achieved strong detection performance (Yuan et al. 2014; Lucas et al. 2021), but their capability is challenged by adversarial attacks. There are three major types of approaches for generating adversarial examples: gradient-based, optimization-based, and Generative Adversarial Network (GAN)-based (Xiao et al. 2018). The first two types have three major issues: (1) they require white-box access to the model architecture and knowledge of its parameters throughout the attack (Xiao et al. 2018); (2) their optimization process is slow and can only optimize the perturbation for one specific instance at a time (Xiao et al. 2018); (3) the resulting adversarial examples have low perceptual quality (Wang et al. 2020).
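For context, a minimal sketch of the gradient-based family, the one-step FGSM of Goodfellow et al. (2015), illustrates issues (1) and (2): the perturbation is derived from the gradient of the loss with respect to the input, so the attacker needs white-box access, and each perturbation is tailored to a single instance. The PyTorch code below is illustrative and not from the paper.

```python
# Hypothetical sketch of a gradient-based attack (FGSM-style), illustrating
# why this family needs white-box access: the perturbation comes from the
# gradient of the loss w.r.t. the input, so model parameters must be visible.
import torch
import torch.nn.functional as F

def fgsm_perturb(model, x, y, epsilon=0.1):
    """One-step gradient-based perturbation for a single instance (x, y)."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    # Step in the direction that increases the loss. The perturbation is
    # specific to this one instance, which is why optimization-based
    # variants are slow when many examples must be crafted.
    return (x + epsilon * x.grad.sign()).detach()
```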
