Abstract

Although deep neural networks (DNNs) have achieved excellent performance on hyperspectral image (HSI) classification tasks, their robustness is threatened by carefully crafted adversarial examples. Adversarial defense methods therefore provide an effective strategy for protecting HSI classification networks. However, most defense models depend heavily on known types of adversarial examples, which leads to poor generalization against unknown attacks. In this study, we propose an attack-invariant attention feature-based defense (AIAF-Defense) model to improve the generalization ability of the defense model. Specifically, the AIAF-Defense model uses an encoder–decoder structure to remove the perturbations from HSI adversarial examples. We design a feature-disentanglement network as the encoder to decouple the attack-invariant spectral–spatial features from the attack-variant features in the adversarial example, and apply a decoder to reconstruct the legitimate HSI example. In addition, an attention-guided reconstruction loss is proposed to address the attention-shift problem caused by perturbations and to constrain the extraction of attack-invariant attention features. Extensive experiments conducted on three benchmark hyperspectral image datasets, PaviaU, HoustonU 2018, and Salinas, show that the proposed AIAF-Defense model improves defense performance against both known and unknown adversarial attacks. The code is available at https://github.com/AAAA-CS/AIAF_HyperspectralAdversarialDefense.
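The disentangle-then-reconstruct idea described above can be illustrated with a minimal sketch. This is not the authors' actual network: the layer sizes, branch names, and use of random (untrained) weights are all hypothetical, and the real model operates on spectral–spatial patches with learned parameters and the attention-guided loss. The sketch only shows the data flow: an encoder splits an adversarial input into an attack-invariant code and an attack-variant code, and the decoder reconstructs a purified example from the attack-invariant code alone.

```python
import numpy as np

rng = np.random.default_rng(0)

def layer(d_in, d_out):
    # A random linear + ReLU layer standing in for a learned mapping.
    W = rng.standard_normal((d_in, d_out)) * 0.1
    return lambda x: np.maximum(x @ W, 0.0)

# Hypothetical sizes: 103 spectral bands (as in PaviaU), 64-d latent codes.
n_bands, d_latent = 103, 64

enc_invariant = layer(n_bands, d_latent)   # attack-invariant spectral-spatial branch
enc_variant   = layer(n_bands, d_latent)   # attack-variant (perturbation) branch
decode        = layer(d_latent, n_bands)   # reconstructs a clean spectrum

x_adv = rng.standard_normal((8, n_bands))  # batch of adversarial pixel spectra
z_inv = enc_invariant(x_adv)               # kept: feeds the decoder
z_var = enc_variant(x_adv)                 # discarded at inference time
x_clean = decode(z_inv)                    # purified example, same shape as input
print(x_clean.shape)
```

At training time, losses would pull `z_inv` toward clean-example features and push the perturbation into `z_var`; at test time only the invariant branch is used, which is what gives the defense its robustness to attack types not seen during training.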
