Abstract

Mobile phones have entered our lives with remarkable rapidity and have reshaped them in a short span. Malware is entangled with all forms of mobile applications, causing havoc and distress. State-of-the-art malware detection systems have successfully applied learning-based techniques to discriminate benign content from malware. However, machine learning (ML) models are vulnerable to adversarial samples and are not intrinsically robust against adversarial attacks: samples crafted against an ML model degrade its performance, and malware authors exploit such attacks to hinder ML-based malware detection. This article examines the effects of evasion attacks on an anti-malware engine built on a feed-forward deep neural network. Experiments on Android malware apps are conducted by constructing a comprehensive feature-engineering scheme for the Drebin dataset through static analysis. The results demonstrate a realistic threat and underline the need for adaptive defenses that yield a secure learning model immune to adversarial attacks.
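The abstract does not specify the attack algorithm, but a common evasion attack in this setting (binary static-analysis features, as in Drebin) greedily flips feature bits from 0 to 1 in the direction that most reduces the detector's malware score; only adding features preserves the app's original behaviour. The sketch below illustrates the idea against a toy feed-forward network with random weights standing in for a trained model; all names and sizes here are illustrative assumptions, not the paper's actual setup.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a trained detector: one hidden layer, sigmoid output.
# In the paper's setting the weights would come from a network trained
# on Drebin's binary static-analysis features; here they are random.
n_features, n_hidden = 20, 8
W1 = rng.normal(size=(n_features, n_hidden))
b1 = np.zeros(n_hidden)
W2 = rng.normal(size=n_hidden)
b2 = 0.0

def forward(x):
    """P(malware) for a binary feature vector x."""
    h = np.tanh(x @ W1 + b1)
    return 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))

def grad_input(x):
    """Gradient of the malware score with respect to the input bits."""
    h = np.tanh(x @ W1 + b1)
    p = 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))
    dh = p * (1.0 - p) * W2          # d(score)/d(hidden activation)
    return (dh * (1.0 - h**2)) @ W1.T

def evade(x, max_flips=10):
    """Greedy evasion: flip 0->1 the bit whose gradient most lowers the
    score, keeping a flip only if the score actually decreases."""
    x = x.copy()
    for _ in range(max_flips):
        g = grad_input(x)
        g[x == 1] = np.inf           # only absent features may be added
        i = int(np.argmin(g))
        if g[i] >= 0:
            break                    # no remaining flip points downhill
        trial = x.copy()
        trial[i] = 1.0
        if forward(trial) >= forward(x):
            break                    # nonlinearity: flip did not help
        x = trial
    return x

x = (rng.random(n_features) < 0.3).astype(float)  # a "malware" sample
print(forward(x), forward(evade(x)))
```

Because flips are only accepted when they lower the score, the evaded sample's malware probability never exceeds the original's; in the white-box threat model the paper studies, such perturbations are what the detector must be hardened against.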
