Abstract
In this paper, we study how the success rate of adversarial attacks on Deep Neural Networks depends on the biomedical image type, the attack control parameters, and the image dataset size. With this work we aim to contribute to the accumulation of experimental results on adversarial attacks for the community working with biomedical images. White-box Projected Gradient Descent (PGD) attacks were examined on 8 classification tasks and 13 image datasets containing a total of 605,080 chest X-ray images and 317,000 histology images of malignant tumors. We concluded that: (1) Increasing the amplitude of the perturbation used to generate malicious adversarial images increases the fraction of successful attacks for the majority of image types examined in this study. (2) Histology images tend to be less sensitive to increases in the amplitude of adversarial perturbations. (3) The percentage of successful attacks grows with the number of iterations of the adversarial perturbation generation algorithm, with asymptotic stabilization. (4) The success rate of attacks drops dramatically when the original confidence of the predicted image class exceeds 0.95. (5) The expected dependence of the percentage of successful attacks on the size of the image training set was not confirmed.
1 Introduction

1.1 The Problem of Security of Computerized Diagnosis

It is well recognized that the security of computerized disease diagnosis is of paramount importance. Deep Learning technologies gave the community well-grounded promise of becoming an effective tool in biomedical image analysis and computerized diagnosis [1, 2]. It was found, however, that along with its high success, Deep Learning brought new security problems. This time the security concerns arose from the vulnerability of methods built on Deep Neural Networks (DNNs) to so-called adversarial attacks. In 2015-2016, a group of researchers provided several examples of the vulnerability of DNNs to adversarial attacks [5]. Several works have since been published on the problem of adversarial attacks, their types, and possible defenses (see, for example, surveys [6, 7] and paper [8]).
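To make the attack studied in this paper concrete, the following is a minimal sketch of an untargeted white-box PGD attack in the L-infinity norm. It is illustrative only: the function and parameter names (`grad_fn`, `eps`, `alpha`, `n_iter`) are our own, the gradient oracle is assumed to be supplied by the attacked model, and the pixel range is assumed to be [0, 1]. The two control parameters varied in this study, the perturbation amplitude (`eps`) and the number of iterations (`n_iter`), appear explicitly.

```python
import numpy as np

def pgd_attack(grad_fn, x0, eps=0.03, alpha=0.01, n_iter=10):
    """Illustrative untargeted PGD attack in the L-infinity ball.

    grad_fn : callable returning the gradient of the classification
              loss with respect to the input image (white-box access).
    x0      : original image as a float array with values in [0, 1].
    eps     : perturbation amplitude (radius of the L-inf ball).
    alpha   : step size of each gradient-sign step.
    n_iter  : number of iterations of the attack.
    """
    x = x0.copy()
    for _ in range(n_iter):
        # Ascend the loss along the sign of the gradient (FGSM-style step).
        x = x + alpha * np.sign(grad_fn(x))
        # Project back onto the eps-ball around the original image.
        x = np.clip(x, x0 - eps, x0 + eps)
        # Keep pixel values in the valid range.
        x = np.clip(x, 0.0, 1.0)
    return x
```

The projection step is what distinguishes PGD from a plain multi-step gradient ascent: however many iterations are run, the adversarial image never deviates from the original by more than `eps` in any pixel, which is why the attack's success rate saturates as the iteration count grows.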