Abstract

Manual medical image diagnosis is a time-consuming process, and its predictions are subject to human error. Deep learning models have enabled efficient and reliable automated systems for medical image analysis. However, these models are highly vulnerable to adversarial attacks, which fool them with deceptively perturbed inputs, causing them to misclassify images and lose their reliability. DeepFool is one such attack, which efficiently computes minimal perturbations that fool deep networks. In this research, we study the impact of the DeepFool attack on the EfficientNet-B0 model using two different datasets. Several defense mechanisms exist to protect models against such attacks; adversarial training is one of them, retraining the model on adversarial examples generated by a particular attack. We also analyse how effectively adversarial training defends the model and improves its robustness.
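To make the attack concrete, below is a minimal sketch of the multiclass DeepFool update in PyTorch. This is not the authors' implementation; the hyperparameter names `overshoot` and `max_iter` follow the conventions of the original DeepFool paper, and the single-image setup and helper signature are assumptions for illustration.

```python
import torch

def deepfool(model, x, num_classes=10, overshoot=0.02, max_iter=50):
    """Minimal multiclass DeepFool sketch (after Moosavi-Dezfooli et al., 2016).

    x: a single image tensor of shape (1, C, H, W); model returns logits.
    Returns the perturbed image and the accumulated perturbation.
    """
    model.eval()
    x = x.clone().detach()
    logits = model(x)
    orig_label = logits.argmax(dim=1).item()
    # Restrict the search to the top-scoring candidate classes for speed.
    candidates = logits[0].argsort(descending=True)[:num_classes]

    x_adv = x.clone().detach().requires_grad_(True)
    r_total = torch.zeros_like(x)

    for _ in range(max_iter):
        logits = model(x_adv)
        if logits.argmax(dim=1).item() != orig_label:
            break  # image is already misclassified
        # Gradient of the original-class logit w.r.t. the input.
        grad_orig = torch.autograd.grad(logits[0, orig_label], x_adv,
                                        retain_graph=True)[0]
        best_ratio, best_w, best_f = float("inf"), None, None
        for k in candidates:
            if k.item() == orig_label:
                continue
            grad_k = torch.autograd.grad(logits[0, k], x_adv,
                                         retain_graph=True)[0]
            w_k = grad_k - grad_orig  # normal of the linearised boundary
            f_k = (logits[0, k] - logits[0, orig_label]).item()
            ratio = abs(f_k) / (w_k.norm() + 1e-8)
            if ratio < best_ratio:  # closest decision boundary so far
                best_ratio, best_w, best_f = ratio, w_k, f_k
        # Minimal step that crosses the closest linearised boundary.
        r_i = (abs(best_f) / (best_w.norm() ** 2 + 1e-8)) * best_w
        r_total = r_total + r_i
        x_adv = (x + (1 + overshoot) * r_total).detach().requires_grad_(True)

    return x_adv.detach(), (1 + overshoot) * r_total
```

Under this sketch, adversarial training would amount to augmenting the training set with such perturbed images (e.g. calling `deepfool(model, x)` on training samples) and retraining so the model learns to classify them correctly.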
