Abstract

Numerous malicious applications target desktop and mobile users today, so malware detection plays a vital role in keeping devices secure and preventing malicious activity from compromising or harvesting users' data. Research indicates that deep neural networks are particularly vulnerable to adversarial attacks. Within a malware family, variants of a malicious sample change little from one another, yet they produce many distinct signatures. A deep learning model, DenseNet, was therefore used for detection. Adversarial samples can be created with various types of noise, including Gaussian noise. We added Gaussian noise to a subset of malware samples and observed that, on Malimg, DenseNet still identified the modified samples precisely, so the attack was unsuccessful. On BIG2015, we found only a marginal decrease in classifier performance, showing that the model remains largely robust to this perturbation. Further experiments with the Fast Gradient Sign Method (FGSM) revealed a significant drop in classification accuracy on both datasets. These results underscore that deep learning models must be made robust to adversarial attacks.
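To illustrate the two perturbation strategies the abstract describes, the following is a minimal sketch of additive Gaussian noise and the FGSM attack, assuming a PyTorch classifier over malware images scaled to [0, 1]. The function names and the epsilon/sigma parameters are illustrative assumptions, not the paper's actual settings or code.

import torch
import torch.nn.functional as F

def gaussian_noise_perturb(images, sigma=0.1):
    """Add zero-mean Gaussian noise to a batch of images (hypothetical sigma)."""
    noisy = images + sigma * torch.randn_like(images)
    return noisy.clamp(0.0, 1.0)  # keep pixels in the valid [0, 1] range

def fgsm_attack(model, images, labels, epsilon=0.05):
    """FGSM: x_adv = x + epsilon * sign(grad_x loss(model(x), y))."""
    images = images.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(images), labels)
    loss.backward()  # populates images.grad with the loss gradient w.r.t. inputs
    adv = images + epsilon * images.grad.sign()
    return adv.clamp(0.0, 1.0).detach()

Gaussian noise perturbs every pixel independently of the model, while FGSM perturbs each pixel in the direction that most increases the classifier's loss, which is consistent with the abstract's observation that FGSM degraded accuracy far more than random noise.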
