Abstract

• We developed twenty distinct malware detection models and investigated their adversarial robustness and evasion resistance.
• We designed a MalDQN agent based on deep reinforcement learning (DRL) to perform a Type-II evasion attack against these twenty malware detection models.
• The MalDQN attack reduced the average accuracy of the twenty malware detection models from 86.18% to 55.85%.
• The MalDQN evasion attack achieved an average fooling rate of 98% against the twenty malware detection models.
• We proposed an adversarial defense to counter evasion attacks and improve the generalizability of malware detection models.

The last decade has witnessed a massive malware boom in the Android ecosystem. The literature suggests that artificial intelligence/machine learning based malware detection models can potentially solve this problem, but these models are often vulnerable to adversarial samples crafted by malware designers. In this work, we therefore validate the adversarial robustness and evasion resistance of different malware detection models built with machine learning. We first designed a neural network agent (MalDQN) based on deep reinforcement learning that perturbs malware applications and converts them into adversarial malware applications. Malware designers can likewise generate such samples and use them in evasion attacks to fool malware detection models. The proposed MalDQN agent achieved an average fooling rate of 98% against twenty distinct malware detection models based on a variety of classification algorithms (standard, ensemble, and deep neural network) and two different feature sets (Android permissions and intents). The MalDQN evasion attack reduced the average accuracy of these twenty models from 86.18% to 55.85%. We then developed defensive measures to counter such evasion attacks. Our experimental results show that the proposed defensive strategies considerably improve the ability of different malware detection models to detect adversarial applications and build resistance against them.
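
To make the attack loop concrete, the snippet below is a deliberately simplified stand-in for the MalDQN agent, not the paper's implementation: it replaces the deep Q-network with a single state-agnostic Q-value per action, uses a toy random dataset in place of real Android permission/intent features, and rewards the agent for lowering the detector's malware score by switching feature bits on (adding permissions/intents, which mirrors functionality-preserving perturbations). All names, hyperparameters, and the labelling rule are illustrative assumptions.

```python
# Minimal sketch of a DRL-style evasion attack on a binary-feature detector.
# Assumptions: toy data, a random-forest detector, and a bandit-style Q update
# standing in for the paper's full deep Q-network.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Toy stand-in for a permission/intent dataset: 500 apps x 20 binary features.
X = rng.integers(0, 2, size=(500, 20))
# Toy labelling rule: malware when "malicious" features outweigh "benign" ones.
y = (X[:, :5].sum(axis=1) > X[:, 5:10].sum(axis=1)).astype(int)

detector = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)

def evade(x, max_steps=10, episodes=50, eps=0.3, alpha=0.5):
    """Learn which single-bit additions push the detector's verdict to benign."""
    n = len(x)
    q = np.zeros(n)  # one Q-value per flip action (state-agnostic simplification)
    for _ in range(episodes):
        s = x.copy()
        p = detector.predict_proba(s[None])[0, 1]  # current malware probability
        for _ in range(max_steps):
            # epsilon-greedy choice of which feature bit to switch ON
            a = int(rng.integers(n)) if rng.random() < eps else int(np.argmax(q))
            s[a] = 1  # add a permission/intent; additions keep the app functional
            p_new = detector.predict_proba(s[None])[0, 1]
            q[a] += alpha * ((p - p_new) - q[a])  # reward = drop in malware score
            p = p_new
            if p < 0.5:  # verdict flipped to benign: adversarial sample found
                return s
    return None

malware = X[y == 1]
fooled = sum(evade(m) is not None for m in malware[:50])
print(f"fooling rate on 50 toy malware samples: {fooled / 50:.0%}")
```

The design choice to only add features, never remove them, reflects a common constraint in Android evasion attacks: extra permissions or intents rarely break an app, whereas removing ones the code relies on would.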

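The abstract does not specify the defensive measures; one common strategy consistent with its description is adversarial retraining. The sketch below, which reuses the `evade`, `detector`, `X`, and `y` names from the previous snippet (all assumptions, not the paper's method), augments the training set with adversarial samples kept under their original malware label and refits the model.

```python
# Adversarial retraining sketch: harden the detector against the evasion agent.
adv = [s for s in (evade(m) for m in X[y == 1]) if s is not None]
if adv:
    X_aug = np.vstack([X, np.array(adv)])                   # original + adversarial apps
    y_aug = np.concatenate([y, np.ones(len(adv), dtype=int)])  # adversarial -> malware
    hardened = RandomForestClassifier(n_estimators=50, random_state=0)
    hardened.fit(X_aug, y_aug)
    # Re-running evade() against `hardened` would measure the residual fooling rate.
```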