Abstract

Synthetic aperture radar (SAR) can perform observations at all times and has been widely used in the military field. Deep neural network (DNN)-based SAR target recognition models have achieved great success in recent years, yet their adversarial robustness has received far less attention in the remote sensing community. In this article, we first present a comprehensive adversarial robustness evaluation framework for DNN-based SAR target recognition. Both data-oriented and model-oriented metrics are used to fully assess recognition performance under adversarial scenarios. Adversarial training is currently one of the most successful methods for improving the adversarial robustness of DNN models. However, it requires class labels to generate adversarial attacks and suffers a significant accuracy drop on testing data. To address these problems, we introduce adversarial self-supervised learning into SAR target recognition for the first time and propose a novel unsupervised adversarial contrastive learning-based defense method. Specifically, we utilize a contrastive learning framework to train a robust DNN with unlabeled data, which aims to maximize the similarity of representations between a random augmentation of a SAR image and its unsupervised adversarial example. Extensive experiments on two SAR image datasets demonstrate that defenses based on adversarial self-supervised learning can obtain robust accuracy comparable to state-of-the-art supervised adversarial learning methods.
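
The training objective described above can be illustrated with a minimal PyTorch-style sketch, not the authors' implementation: an unsupervised adversarial view of a SAR image is generated by maximizing a contrastive (NT-Xent) loss without labels, and the encoder is trained to pull together the representations of a clean augmentation and its adversarial counterpart. The `encoder`, `augment`, and the hyperparameters `epsilon`, `alpha`, `steps`, and `tau` are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def nt_xent(z1, z2, tau=0.5):
    """NT-Xent contrastive loss between two batches of projections."""
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    z = torch.cat([z1, z2], dim=0)                      # (2N, d)
    sim = z @ z.t() / tau                               # scaled cosine similarities
    n = z1.size(0)
    mask = torch.eye(2 * n, dtype=torch.bool, device=z.device)
    sim = sim.masked_fill(mask, float('-inf'))          # exclude self-similarity
    # The positive for sample i is its counterpart in the other view.
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)]).to(z.device)
    return F.cross_entropy(sim, targets)

def unsupervised_adv_example(encoder, x, epsilon=0.03, steps=5, alpha=0.01):
    """PGD-style attack that maximizes the contrastive loss; no class labels needed."""
    x_adv = x.clone().detach() + epsilon * torch.empty_like(x).uniform_(-1, 1)
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = nt_xent(encoder(x), encoder(x_adv))
        grad = torch.autograd.grad(loss, x_adv)[0]
        # Ascend the loss, then project back into the epsilon-ball and valid pixel range.
        x_adv = (x_adv.detach() + alpha * grad.sign()).clamp(x - epsilon, x + epsilon)
        x_adv = x_adv.clamp(0.0, 1.0)
    return x_adv.detach()

def training_step(encoder, optimizer, x, augment):
    """One adversarial contrastive step: align a clean augmentation and its adversarial view."""
    x_aug = augment(x)
    x_adv = unsupervised_adv_example(encoder, x_aug)
    loss = nt_xent(encoder(x_aug), encoder(x_adv))
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

This is only a schematic of the idea; details such as the augmentation pipeline, projection head, and fine-tuning protocol are left unspecified here.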

Highlights

  • Synthetic aperture radar (SAR) actively emits microwaves and improves azimuth resolution through the principle of a synthetic aperture to obtain large-area high-resolution radar images [1]

  • Shao et al [3] analyzed the performance of different deep neural networks (DNNs) on the MSTAR dataset [4] in terms of classification accuracy, training time, and other metrics to verify the superiority of DNNs for SAR target recognition

  • We evaluate models trained with less data to simulate situations in which training data are insufficient


Summary

Introduction

Synthetic aperture radar (SAR) actively emits microwaves and improves azimuth resolution through the principle of a synthetic aperture to obtain large-area, high-resolution radar images [1]. It has been noted that the linear nature of DNNs is the primary cause of their vulnerability to adversarial examples; based on this theory, a gradient-based approach to generating adversarial examples, named the fast gradient sign method (FGSM), was proposed. [11] proposed the DeepFool algorithm to simplify L-BFGS and fool deep models, and Moosavi-Dezfooli et al [13] further found the existence of universal adversarial examples: adding very small perturbation vectors to original images can cause erroneous outputs for different images. Deep SAR target recognition models are more likely to suffer from the overfitting problem, resulting in a weaker generalization capability and greater sensitivity to perturbation [14].
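
As a concrete illustration of the gradient-based attack mentioned above, a minimal FGSM sketch in PyTorch is given below; it is an assumption-based example rather than code from any cited work, and `model`, `epsilon`, and the [0, 1] pixel range are illustrative.

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, images, labels, epsilon=0.03):
    """FGSM: x_adv = x + epsilon * sign(grad_x L(f(x), y))."""
    images = images.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(images), labels)
    loss.backward()
    # One step in the direction that increases the loss, then clip to the valid pixel range.
    x_adv = images + epsilon * images.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()
```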

Illustration of adversarial attacks
Definition of Adversarial Robustness
Adversarial Robustness Evaluation
Model-Oriented Robustness Metrics
Data-Oriented Robustness Metrics
Drawbacks of Adversarial Training
Adversarial Self-Supervised Learning Defenses
Unsupervised Adversarial Contrastive Learning
Experimental Results
Experimental Design and Settings
Evaluation on Adversarial Attacks
Evaluation of Adversarial Defenses
Visualization of DNNs
Guided backpropagation of the MSTAR model on clean images
Conclusions