Abstract

Owing to their powerful data-fitting ability, deep neural networks have been applied across a wide range of applications in many key areas. In recent years, however, it has been found that adversarial samples can easily fool deep neural networks. These inputs are generated by adding a few small perturbations to the original sample, and they can significantly alter the decision of the target model while remaining imperceptible. Image segmentation is one of the most important technologies in medical imaging and autonomous driving. This paper explores the security of deep neural network models on image segmentation tasks. Two lightweight image segmentation models deployed on an embedded device are subjected to white-box attacks using local perturbations and universal perturbations. The perturbations are generated indirectly through a noise function and an intermediate variable, so that the gradients with respect to the pixels can be propagated without limit. Through experiments, we find that different models have different blind spots and that adversarial samples trained against a single model do not transfer. We therefore attack multiple models with a joint learning scheme. Under a low-perturbation constraint, most of the pixels in the attacked region are misclassified by both lightweight models. The experimental results show that the proposed adversary is more likely to degrade the performance of the segmentation model than FGSM.
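The abstract describes generating perturbations indirectly through a noise function and an intermediate variable so that pixel gradients can propagate without clipping. Below is a minimal sketch of that idea, assuming a tanh change-of-variables as the noise function and PyTorch as the framework; the model, loss, and variable names are illustrative assumptions, not the authors' code.

```python
import torch

# Hypothetical setup: `model` is a segmentation network, `x` the clean image in [0, 1],
# `target` the desired (wrong) per-pixel labels. All names are illustrative.
def attack_via_intermediate_variable(model, x, target, steps=200, lr=0.01, eps=8 / 255):
    # w is the unconstrained intermediate variable; its gradients are never clipped.
    w = torch.zeros_like(x, requires_grad=True)
    opt = torch.optim.Adam([w], lr=lr)
    loss_fn = torch.nn.CrossEntropyLoss()

    for _ in range(steps):
        # "Noise function": map the unbounded variable to a bounded perturbation.
        delta = eps * torch.tanh(w)            # perturbation stays inside [-eps, eps]
        x_adv = (x + delta).clamp(0.0, 1.0)    # keep the adversarial image a valid image

        logits = model(x_adv)                  # (N, C, H, W) per-pixel class scores
        loss = loss_fn(logits, target)         # push pixels toward the adversarial labels

        opt.zero_grad()
        loss.backward()
        opt.step()

    return (x + eps * torch.tanh(w)).detach().clamp(0.0, 1.0)
```

Because the optimizer updates the intermediate variable rather than the pixels directly, no projection or gradient clipping is needed during back-propagation; the bounded mapping enforces the low-perturbation constraint mentioned in the abstract.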

Highlights

  • Deep neural networks have been widely applied in various fields, including computer vision, speech recognition, natural language processing and robotics [1]

  • In the field of computer vision, semantic image segmentation is an essential method of scene understanding that can be used for autonomous driving, video

  • In the era of artificial intelligence (AI), most computer vision techniques are based on image segmentation, and research on image segmentation techniques has been underway for decades


Summary

INTRODUCTION

Deep neural networks have been widely applied in various fields, including computer vision, speech recognition, natural language processing, and robotics [1]. Through experiments, we show that adversarial learning against a deep neural network for the image segmentation task does not transfer across models, and we therefore propose an adversarial attack method based on multi-model joint learning. Xie et al. [41] proposed dense adversary generation for segmentation and detection, so that perturbations can transfer across networks trained on different data, built on different architectures, and even used for different recognition tasks. The analysis and explanation of the comparison between the two methods are presented in the experimental results.
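The introduction notes that single-model adversarial samples do not transfer and motivates multi-model joint learning. A minimal sketch of one way such a joint attack could be realized, assuming two or more PyTorch segmentation models share one perturbation; the summed-loss formulation and all names are assumptions for illustration, not the paper's exact method.

```python
import torch

# Hypothetical joint attack: one shared perturbation optimized against several models.
def joint_attack(models, x, target, steps=200, lr=0.01, eps=8 / 255):
    w = torch.zeros_like(x, requires_grad=True)     # shared intermediate variable
    opt = torch.optim.Adam([w], lr=lr)
    loss_fn = torch.nn.CrossEntropyLoss()

    for _ in range(steps):
        x_adv = (x + eps * torch.tanh(w)).clamp(0.0, 1.0)
        # Sum the per-model losses so the same perturbation must fool every model.
        loss = sum(loss_fn(m(x_adv), target) for m in models)
        opt.zero_grad()
        loss.backward()
        opt.step()

    return (x + eps * torch.tanh(w)).detach().clamp(0.0, 1.0)
```

Optimizing the summed loss couples the models during training, which captures the intuition behind joint learning: the attack seeks a blind spot shared by all models rather than one specific to a single network.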

NON-LINEAR ADVERSARIAL SAMPLES GENERATION
ADVERSARIAL PERTURBATIONS TO LOCAL SOURCE DOMAIN
EXPERIMENTS
CONCLUSION

