Abstract
It is well known that most neural networks in wide use today are highly susceptible to adversarial perturbations, which cause misclassification of the output and can, in turn, raise severe security concerns. In this paper, we evaluate the robustness of prominent pre-trained deep learning models against images modified with the Fast Gradient Sign Method (FGSM) attack. For this purpose, we selected the following models: InceptionV3, InceptionResNetV2, ResNet152V2, Xception, DenseNet121, and MobileNetV2. All of these models are pre-trained on ImageNet, and we therefore use our custom 10-animal test dataset to produce both clean and misclassified output. Rather than focusing solely on prediction accuracy, our study quantifies the perturbation required to alter output labels, shedding light on the models' susceptibility to misclassification. The results reveal varying vulnerabilities among the models to FGSM attacks, providing nuanced insights for fortifying neural networks against adversarial threats.
Key Words: Adversarial Perturbations, Deep Learning, ImageNet, FGSM Attack, Neural Networks, Pre-trained Models
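The FGSM attack referenced in the abstract perturbs an input in the direction of the sign of the loss gradient with respect to that input: x_adv = x + ε · sign(∇_x L(x, y)). As a minimal sketch (not the paper's Keras models), the idea can be shown on a toy logistic-regression classifier, where the weights, input, and epsilon below are illustrative assumptions:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_perturb(x, y, w, b, epsilon):
    """Return x + epsilon * sign(grad_x loss) for binary cross-entropy.

    For logistic regression, d(BCE)/dx = (p - y) * w, where p is the
    predicted probability of the positive class.
    """
    p = sigmoid(w @ x + b)       # model prediction in (0, 1)
    grad_x = (p - y) * w         # gradient of the loss w.r.t. the input
    return x + epsilon * np.sign(grad_x)

# Hypothetical toy model and input (assumed for illustration only)
w = np.array([1.0, -2.0, 0.5])
b = 0.0
x = np.array([0.2, -0.1, 0.4])   # clean input, true label y = 1
y = 1.0

x_adv = fgsm_perturb(x, y, w, b, epsilon=0.3)
# The perturbation increases the loss, lowering the model's confidence
# in the true label:
print(sigmoid(w @ x + b) > sigmoid(w @ x_adv + b))  # → True
```

In the paper's setting, the same sign-of-gradient step is applied to image pixels, and the study measures how large ε must be before each pre-trained model's top label changes.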
Published in: International Journal of Scientific Research in Engineering and Management