Abstract
With the extensive deployment of deep learning, research on adversarial examples has received more attention than ever before. By modifying a small fraction of the original image, an adversary can mislead a well-trained model into making a wrong prediction. However, existing work on adversarial attack and defense mainly focuses on image classification and pays little attention to more practical tasks such as segmentation. In this work, we propose a query-based black-box attack that can alter the predicted classes of foreground pixels within a limited query budget. The proposed method improves on the Adaptive Square Attack by employing a more accurate estimation of the loss gradient and replacing the fixed variance of the adaptive distribution with a learnable one. We also adopt a novel loss function designed for attacking medical image segmentation models. Experiments on a widely used dataset and well-known models demonstrate the effectiveness and efficiency of the proposed method in attacking medical image segmentation models. The implementation code and extensive analysis are available at https://github.com/Ikracs/medical_attack.
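For readers unfamiliar with this family of attacks, the sketch below illustrates the general idea of a query-based, score-based black-box attack in the spirit of Square Attack applied to a segmentation model. It is a minimal illustration only, not the authors' implementation: the model interface, the foreground-restricted loss, the patch-size schedule, and all parameter names are assumptions made for the example.

```python
import numpy as np

def square_attack_seg(model, x, mask, eps=0.03, n_queries=1000, p_init=0.1, rng=None):
    """Illustrative square-attack-style black-box attack on a segmentation model.

    Assumptions (not from the paper):
      model(x) -> per-pixel class probabilities of shape (C, H, W)
      x        -> clean image of shape (C, H, W), values in [0, 1]
      mask     -> integer ground-truth labels of shape (H, W), foreground > 0
      eps      -> L_inf perturbation budget
    """
    rng = rng or np.random.default_rng(0)
    c, h, w = x.shape

    def loss(adv):
        # Untargeted objective: average probability assigned to the true class,
        # restricted to foreground pixels (lower is better for the attacker).
        probs = model(adv)                                        # (C, H, W)
        true_prob = np.take_along_axis(probs, mask[None], axis=0)[0]
        return true_prob[mask > 0].mean()

    # Random vertical-stripe initialization, as in the original Square Attack.
    x_adv = np.clip(x + eps * rng.choice([-1.0, 1.0], size=(c, 1, w)), 0.0, 1.0)
    best = loss(x_adv)

    for q in range(n_queries):
        # Shrink the square side length as the query budget is consumed.
        p = p_init * (1.0 - q / n_queries)
        s = max(1, min(int(round(np.sqrt(p * h * w))), h, w))
        r = rng.integers(0, h - s + 1)
        col = rng.integers(0, w - s + 1)

        # Propose a fresh +/-eps perturbation inside one random square,
        # built from the clean image so the L_inf budget is never exceeded.
        cand = x_adv.copy()
        cand[:, r:r + s, col:col + s] = np.clip(
            x[:, r:r + s, col:col + s]
            + eps * rng.choice([-1.0, 1.0], size=(c, 1, 1)),
            0.0, 1.0)

        cand_loss = loss(cand)                                    # one model query
        if cand_loss < best:                                      # keep improving proposals
            x_adv, best = cand, cand_loss
    return x_adv
```

Each iteration costs exactly one query, and every proposal is constructed from the clean image, so the accepted perturbation always stays within the L_inf budget. The paper's method differs in its gradient estimation, learnable-variance sampling distribution, and loss function.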