Abstract

Mobile phones have entered our lives with remarkable speed and have reshaped them in a short span. Malware is entangled with all forms of mobile applications, causing havoc and distress. State-of-the-art malware detection systems have successfully applied learning-based techniques to discriminate benign content from malware. However, machine learning (ML) models are vulnerable to adversarial samples and are not intrinsically robust against adversarial attacks: adversarial samples crafted against an ML model degrade its performance, and malware authors exploit such attacks to undermine ML-based malware detection. This article examines the effects of evasion attacks on an anti-malware engine built on a feed-forward deep neural network. Experiments on Android malware apps are conducted by constructing a comprehensive feature-engineering scheme for the Drebin dataset through static analysis. The results demonstrate a realistic threat and underline the need for adaptive defenses that yield a secure learning model immune to adversarial attacks.
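To make the threat concrete, the sketch below illustrates one common style of evasion attack against a feed-forward detector over binary static features (as in Drebin-style feature vectors): greedily adding features whose input gradient most decreases the malware score, since feature *additions* are less likely to break app functionality than removals. This is a minimal illustration under assumed details, not the paper's exact method; the network weights here are random placeholders standing in for a trained detector.

```python
import numpy as np

rng = np.random.default_rng(0)

# Tiny feed-forward net: 20 binary static features -> 8 hidden -> 1 logit.
# (Hypothetical weights; a real detector would be trained on Drebin features.)
W1 = rng.normal(0, 1, (20, 8))
b1 = np.zeros(8)
W2 = rng.normal(0, 1, (8, 1))
b2 = np.zeros(1)

def forward(x):
    """Return (malware logit, hidden activations); logit > 0 means 'malware'."""
    h = np.tanh(x @ W1 + b1)
    logit = (h @ W2 + b2).item()
    return logit, h

def grad_input(x):
    """Gradient of the malware logit w.r.t. the input features."""
    _, h = forward(x)
    dz = (1 - h**2) * W2.ravel()   # backprop through tanh
    return W1 @ dz                 # shape (20,)

def evade(x, budget=10):
    """Greedily ADD features (0 -> 1 only) that most decrease the
    malware logit, stopping when classified benign or budget is spent."""
    x = x.copy()
    for _ in range(budget):
        if forward(x)[0] < 0:      # already classified benign
            break
        g = grad_input(x)
        g[x == 1] = np.inf         # only absent features may be added
        i = int(np.argmin(g))
        if g[i] >= 0:              # no feature addition helps any more
            break
        x[i] = 1.0
    return x

# A random binary "app" feature vector and its evasive counterpart.
x0 = (rng.random(20) < 0.3).astype(float)
x_adv = evade(x0)
```

By construction the perturbation is additive and bounded by the budget, which mirrors the practical constraint that a malware author can inject benign-looking permissions or API calls but cannot freely remove the ones the payload needs.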
