Abstract

In medical science, the reliability of disease diagnoses produced by deep learning classifiers plays a crucial role. This reliability is substantially reduced by the presence of adversarial examples, which mislead classifiers into making wrong predictions with confidence equal to or greater than that of the correct prediction. In the black-box setting, an attack is carried out by building a pseudo (surrogate) model that resembles the target model; adversarial examples are crafted on the pseudo model and then transferred to the target. In this work, the Fast Gradient Sign Method (FGSM) and its variants, the Momentum Iterative Fast Gradient Sign Method (MI-FGSM), Projected Gradient Descent (PGD) and the Basic Iterative Method (BIM), are used to create adversarial examples against a target VGG-16 model. The datasets used are the Diabetic Retinopathy 2015 Data Colored Resized dataset and the SARS-CoV-2 CT Scan Dataset. The experiments show that the attacks listed above transfer to the VGG-16 target model, and that the Projected Gradient Descent attack achieves a higher attack success rate than the other methods examined in this work.
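For context, the sketch below illustrates the transfer-attack procedure the abstract describes: craft an FGSM perturbation on a surrogate model, then check whether it also flips the predictions of a separately held target model. It is a minimal illustration assuming a PyTorch/torchvision setup; the epsilon value, the random image batch, and the use of untrained VGG-16 instances for both roles are placeholders, not the paper's actual pipeline or hyperparameters.

    import torch
    import torch.nn as nn
    from torchvision import models

    def fgsm_attack(model, images, labels, epsilon):
        """Craft FGSM adversarial examples on a surrogate (pseudo) model."""
        images = images.clone().detach().requires_grad_(True)
        loss = nn.CrossEntropyLoss()(model(images), labels)
        model.zero_grad()
        loss.backward()
        # Step each pixel in the direction of the sign of the loss gradient.
        adv_images = images + epsilon * images.grad.sign()
        return torch.clamp(adv_images, 0, 1).detach()

    # Surrogate and black-box target (illustrative: both VGG-16, weights would differ in practice).
    surrogate = models.vgg16(weights=None)
    target = models.vgg16(weights=None)
    surrogate.eval()
    target.eval()

    # Hypothetical batch standing in for retinal fundus or CT images.
    x = torch.rand(4, 3, 224, 224)
    y = torch.randint(0, 2, (4,))

    x_adv = fgsm_attack(surrogate, x, y, epsilon=8 / 255)

    # Transferability check: do examples crafted on the surrogate also fool the target?
    with torch.no_grad():
        clean_pred = target(x).argmax(dim=1)
        adv_pred = target(x_adv).argmax(dim=1)
    print("Flipped predictions:", (clean_pred != adv_pred).sum().item(), "of", len(y))

The iterative variants studied in the paper (MI-FGSM, PGD, BIM) repeat a step of this form with a smaller per-step size, projecting back into an epsilon-ball around the original image (and, for MI-FGSM, accumulating a momentum term on the gradient).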
