Abstract

Recent studies have revealed that synthetic aperture radar automatic target recognition (SAR-ATR) models based on deep learning are vulnerable to adversarial examples, which raises security concerns. An adversarial attack can make a deep convolutional neural network (CNN)-based SAR-ATR system output attacker-intended wrong label predictions by adding small adversarial perturbations to the SAR images. Existing optimization-based adversarial attack methods generate adversarial examples by minimizing the mean-squared reconstruction error, which smooths the target edges and blurs the weak scattering centers in SAR images. In this paper, we build a UNet-generative adversarial network (GAN) to refine the generation of adversarial examples for SAR-ATR models. The UNet learns the separable features of the targets and generates the adversarial examples of SAR images. The GAN makes the generated adversarial examples approximate real SAR images (with sharp target edges and explicit weak scattering centers) and improves the generation efficiency. We carry out extensive experiments in which the proposed adversarial attack algorithm fools SAR-ATR models based on several advanced CNNs, trained on measured SAR images of ground vehicle targets. The quantitative and qualitative results demonstrate high-quality adversarial example generation as well as improved attack effectiveness and efficiency.
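To make the pipeline concrete, the following is a minimal PyTorch sketch of the UNet-GAN idea described in the abstract, not the authors' released code: a small UNet-style generator produces a bounded perturbation, a discriminator pushes the perturbed image toward the real-SAR distribution, and a frozen target CNN supplies the misclassification term. All layer sizes, loss weights, and names (`TinyUNet`, `generator_loss`, `eps`) are illustrative assumptions.

```python
# Minimal sketch of the UNet-GAN attack, assuming 64x64 single-channel SAR
# chips with pixel values in [0, 1]; shapes and loss weights are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyUNet(nn.Module):
    """Toy one-level UNet: one downsampling stage and one skip connection."""
    def __init__(self):
        super().__init__()
        self.enc = nn.Conv2d(1, 16, 3, stride=2, padding=1)           # 64x64 -> 32x32
        self.mid = nn.Conv2d(16, 16, 3, padding=1)
        self.dec = nn.ConvTranspose2d(16, 8, 4, stride=2, padding=1)  # 32x32 -> 64x64
        self.out = nn.Conv2d(8 + 1, 1, 3, padding=1)                  # +1: skip from input

    def forward(self, x):
        h = F.relu(self.enc(x))
        h = F.relu(self.mid(h))
        h = F.relu(self.dec(h))
        h = torch.cat([h, x], dim=1)       # UNet-style skip connection
        return torch.tanh(self.out(h))     # perturbation bounded to [-1, 1]

class Discriminator(nn.Module):
    """Judges whether an image looks like a real SAR image."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(16, 32, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Flatten(), nn.Linear(32 * 16 * 16, 1))

    def forward(self, x):
        return self.net(x)

def generator_loss(G, D, target_cnn, x, wrong_labels,
                   eps=0.1, w_gan=1.0, w_adv=10.0):
    """GAN realism term + targeted misclassification term.
    target_cnn is the frozen victim SAR-ATR model (weights never updated)."""
    x_adv = torch.clamp(x + eps * G(x), 0.0, 1.0)        # small additive perturbation
    d_logits = D(x_adv)
    gan = F.binary_cross_entropy_with_logits(            # fool D: "this is a real SAR image"
        d_logits, torch.ones_like(d_logits))
    adv = F.cross_entropy(target_cnn(x_adv), wrong_labels)  # steer toward the intended label
    return w_gan * gan + w_adv * adv, x_adv
```

In a full training loop the discriminator would be updated in the usual GAN alternation against real SAR images; once trained, the generator produces an adversarial example in a single forward pass.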

Highlights

  • We attack different synthetic aperture radar automatic target recognition (SAR-ATR) models based on deep convolutional neural networks (CNNs) (AlexNet, VGGNet16, ResNet32) under the white-box condition, in which the network structures and parameters of the recognition models are known to the attacker (see the PGD sketch after this list for what such access enables)

  • We propose an adversarial attack method based on UNet and a generative adversarial network (GAN) for deep learning-based SAR-ATR models
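
As context for the white-box setting in the first highlight, here is a standard projected gradient descent (PGD) loop, a common baseline attack rather than the paper's UNet-GAN method: white-box access means the attacker can backpropagate through the victim model's known structure and parameters. The step size `alpha`, budget `eps`, and iteration count are assumed values.

```python
# Illustrative sketch of what white-box access enables: a standard untargeted
# L_inf PGD loop that backpropagates through the victim model. This is a
# baseline for contrast, not the paper's UNet-GAN generator.
import torch
import torch.nn.functional as F

def pgd_attack(model, x, labels, eps=8/255, alpha=2/255, steps=10):
    """model should be in eval mode; pixel values assumed in [0, 1]."""
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), labels)
        grad, = torch.autograd.grad(loss, x_adv)   # white-box: victim gradients
        with torch.no_grad():
            x_adv = x_adv + alpha * grad.sign()            # gradient-ascent step
            x_adv = x + (x_adv - x).clamp(-eps, eps)       # project onto the eps-ball
            x_adv = x_adv.clamp(0.0, 1.0)                  # stay a valid image
    return x_adv.detach()
```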


Introduction

As an active imaging sensor, synthetic aperture radar (SAR) has the advantages of collecting all-time, all-weather, high-resolution images [1,2,3]. Szegedy et al. [13] first discovered that, by injecting well-designed tiny perturbations into image samples, adversarial examples can be intentionally produced that cause a recognition model to misclassify (formalized below). This process of generating adversarial examples is known as an "adversarial attack" and has become a recent research trend [14,15,16,17,18,19] in remote sensing, radar, radio, and related fields. In radar signal processing, the works [14,15] verify that high-resolution range profile (HRRP) and SAR image target recognition models can be successfully attacked by well-designed adversarial examples. The work [18] systematically analyzes the
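
For concreteness, the problem Szegedy et al. [13] posed can be written as a constrained optimization: given a classifier f and an input x, find the smallest perturbation r that forces the prediction to a chosen label l while keeping the perturbed image valid.

```latex
\min_{r} \; \|r\|_2 \quad \text{subject to} \quad f(x + r) = l, \qquad x + r \in [0, 1]^m
```

The optimization-based SAR attacks discussed in the abstract descend from this per-image formulation; the UNet-GAN instead generates adversarial examples with a feed-forward network, which plausibly underlies the efficiency improvement the paper claims.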

