Abstract

Machine learning-based methods have been widely used in malware detection. However, recent studies show that models based on machine learning (or deep learning) are vulnerable to adversarial attacks: slight perturbations to the input can cause a model to produce false detection results with high confidence. Although some research efforts have been made to defend against adversarial attacks, existing methods suffer from limitations in detection accuracy and labeling cost. To address this problem, we propose an ensemble learning framework for Windows malware adversarial defense that contains two methods. The first is an adversarial sample detection method that defeats specific adversarial attacks; it divides malware features into groups and uses ensemble learning to detect adversarial samples. The second is an anomaly detection method that defends against attack-agnostic adversarial attacks; it treats adversarial samples as outliers and uses unsupervised and semi-supervised learning to construct anomaly detection models. We deploy the proposed adversarial defense methods as supplementary modules to the original malware detection models. Experiments show that our methods improve the robustness of malware detection models against adversarial attacks. Moreover, comparison experiments indicate that our methods outperform traditional adversarial training by about 11% in detection accuracy.
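The two defenses described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the feature groups, sub-models, disagreement threshold, and the use of scikit-learn's RandomForestClassifier and IsolationForest are all assumptions made for the example, and the data is synthetic.

```python
# Hedged sketch of the two defenses in the abstract (illustrative only):
#  1) split features into groups, train one sub-model per group, and flag
#     a sample as adversarial when the groups' votes disagree;
#  2) treat adversarial samples as outliers via unsupervised anomaly detection.
import numpy as np
from sklearn.ensemble import RandomForestClassifier, IsolationForest

rng = np.random.default_rng(0)

# Synthetic stand-in for malware feature vectors: 300 samples, 12 features.
X = rng.normal(size=(300, 12))
y = (X[:, 0] + X[:, 4] + X[:, 8] > 0).astype(int)  # toy binary labels

# Method 1: feature grouping + ensemble voting.
groups = [list(range(0, 4)), list(range(4, 8)), list(range(8, 12))]
models = []
for g in groups:
    clf = RandomForestClassifier(n_estimators=50, random_state=0)
    clf.fit(X[:, g], y)          # each sub-model sees only its feature group
    models.append((g, clf))

def is_adversarial(x, disagreement_threshold=1):
    """Flag a sample when group sub-models disagree: a perturbation that
    touches only some features tends to flip only some groups' votes."""
    votes = [clf.predict(x[g].reshape(1, -1))[0] for g, clf in models]
    minority = min(votes.count(0), votes.count(1))
    return minority >= disagreement_threshold

# Method 2: adversarial samples as outliers (unsupervised anomaly detection).
iso = IsolationForest(random_state=0).fit(X)

def is_outlier(x):
    # IsolationForest predicts -1 for outliers, +1 for inliers.
    return iso.predict(x.reshape(1, -1))[0] == -1

print(is_adversarial(X[0]), is_outlier(X[0]))
```

In this sketch either detector can veto the base malware classifier's decision, mirroring the paper's use of the defenses as supplementary modules rather than replacements for the original detection model.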
