Abstract

It has been demonstrated that deep neural network (DNN)-based synthetic aperture radar (SAR) automatic target recognition (ATR) techniques are extremely susceptible to adversarial attacks, that is, malicious SAR images containing deliberately generated perturbations that are imperceptible to the human eye but can mislead DNN inference. Attack algorithms in previous studies rely on direct access to the victim ATR model, such as its gradients or training data, to generate adversarial examples for a target SAR image, which contradicts the non-cooperative nature of ATR applications. In this article, we establish a fully black-box universal attack (FBUA) framework to craft a single universal adversarial perturbation (UAP) that is effective against a wide range of DNN architectures as well as a large fraction of target images. Because the UAP can be designed in advance and without any access to the victim DNN, the FBUA is both highly practical for an attacker and a serious risk for ATR systems. The proposed FBUA can be decomposed into three main phases: (1) SAR image simulation, (2) substitute model training, and (3) UAP generation. Comprehensive evaluations on the MSTAR and SARSIM datasets demonstrate the efficacy of the FBUA, which achieves an average fooling ratio of 64.6% across eight cutting-edge DNNs when the magnitude of the UAP is set to 16/255. Furthermore, we empirically find that the black-box UAP functions mainly by activating spurious features that couple with clean features to force the ATR models to concentrate their predictions on a few categories, exhibiting a class-wise vulnerability. The proposed FBUA aligns with the non-cooperative nature of ATR applications and reveals the access-free adversarial vulnerability of DNN-based SAR ATR techniques, providing a foundation for future defenses against black-box threats.
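
The following is a minimal illustrative sketch, not the authors' implementation, showing how a single precomputed UAP can be applied to SAR images under the L-infinity budget of 16/255 mentioned above and how the fooling ratio (the fraction of samples whose predicted class changes) can be measured. The function `victim_predict` and the array shapes are hypothetical placeholders.

```python
import numpy as np

EPS = 16 / 255  # L_inf perturbation magnitude used in the paper's evaluation


def apply_uap(images: np.ndarray, uap: np.ndarray) -> np.ndarray:
    """Add one universal perturbation to every image, respecting the budget and pixel range."""
    uap = np.clip(uap, -EPS, EPS)              # enforce ||uap||_inf <= 16/255
    return np.clip(images + uap, 0.0, 1.0)     # keep pixel values in [0, 1]


def fooling_ratio(victim_predict, images: np.ndarray, uap: np.ndarray) -> float:
    """Fraction of samples whose predicted label changes after adding the UAP."""
    clean_labels = victim_predict(images)                    # hypothetical black-box classifier
    adv_labels = victim_predict(apply_uap(images, uap))
    return float(np.mean(clean_labels != adv_labels))
```

Note that the fooling ratio only requires label outputs from the victim model, which is consistent with the fully black-box setting: the UAP itself is generated beforehand on a substitute model trained on simulated SAR images.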
