Abstract

Convolutional Neural Network (CNN) models have gained ground in research, particularly in medical imaging for Diabetic Retinopathy (DR) detection. X-ray, MRI, and CT scans have all been used to validate CNN models, with classification accuracy generally matching that of trained clinicians. Evaluating the robustness of CNN models used in medical tasks against adversarial attacks is essential, especially in healthcare: the security of such models bears directly on diagnosis, which in turn guides high-stakes decision-making. However, little study has been conducted to better understand this issue. This paper focuses on the MobileNet CNN architecture and investigates its vulnerability to fast gradient sign method (FGSM) adversarial attacks. To this end, Neural Structured Learning (NSL) and Multi-Head Attention (MHA) are used to reduce this vulnerability by training the CNN end to end with adversarial neighbors that introduce adversarial perturbations on optical coherence tomography (OCT) images. The proposed NSL-MHA-CNN model maintains performance under adversarial attack without increasing training cost. Through theoretical analysis and empirical validation, the stability of the MobileNet architecture was examined and its susceptibility to adversarial attack demonstrated. The experiments in this paper show that imperceptible perturbations of magnitude ε < 0.01 were sufficient to cause task failure, producing misclassification in the majority of cases. Moreover, empirical simulation shows that the proposed approach can serve as an effective defense against adversarial attacks at CNN model test time.
