Abstract

Research has shown that deep neural networks (DNNs) are vulnerable to adversarial examples, a weakness often attributed to their highly linear behavior. Adversarial examples therefore pose a security concern for deep learning systems. However, little work has examined their impact on biomedical segmentation models. Since a large portion of medical imaging tasks are segmentation tasks, this paper analyzes the impact of adversarial examples on deep-learning-based image segmentation models. We propose to fool biomedical segmentation models into producing attacker-specified target segmentation masks by combining feature-space perturbations with a cross-entropy loss. Unlike traditional gradient-based attacks, which typically use only the gradient of the final loss function, we adopt a Multi-scale Attack (MSA) method based on multi-scale gradients. Extensive experiments attacking U-Net on the ISIC skin lesion segmentation challenge dataset and a glaucoma optic disc segmentation dataset show that the predicted masks produced by this method achieve high intersection over union (IoU) and pixel accuracy with respect to the target masks. Moreover, the L2 and L∞ distances between the adversarial and clean examples are smaller than those of the state-of-the-art method.
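
To make the idea concrete, below is a minimal sketch of a targeted, multi-scale gradient attack of the kind the abstract describes. It assumes a segmentation model whose forward pass returns logits at several decoder scales (finest last); this interface, the hyper-parameters, and the iterative L∞-bounded update are illustrative assumptions, not the authors' exact MSA implementation.

```python
import torch
import torch.nn.functional as F

def multiscale_targeted_attack(model, x, target_mask, eps=8/255, alpha=1/255, steps=40):
    """Hypothetical targeted attack using cross-entropy losses at multiple scales.

    Assumes `model(x)` returns a list of logit maps of shape (N, C, h, w),
    and `target_mask` holds class indices of shape (N, H, W).
    """
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        outputs = model(x_adv)  # list of multi-scale logit maps
        loss = 0.0
        for logits in outputs:
            # Resize the target mask to this scale and accumulate
            # cross-entropy toward the attacker-chosen target segmentation.
            t = F.interpolate(target_mask.float().unsqueeze(1),
                              size=logits.shape[-2:], mode="nearest")
            loss = loss + F.cross_entropy(logits, t.squeeze(1).long())
        grad = torch.autograd.grad(loss, x_adv)[0]
        # Targeted attack: step down the loss so the prediction matches the target mask.
        x_adv = x_adv.detach() - alpha * grad.sign()
        # Project back into an L-infinity ball around the clean image and valid pixel range.
        x_adv = torch.clamp(torch.min(torch.max(x_adv, x - eps), x + eps), 0, 1)
    return x_adv
```

In this sketch, summing the cross-entropy over all decoder scales is one plausible way to realize "multi-scale gradients": the update direction reflects intermediate feature resolutions rather than only the final output, which is the distinction the abstract draws against conventional single-loss gradient attacks.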
