Abstract

Deep neural networks play a significant role in hyperspectral image (HSI) processing, yet they are easily fooled by adversarial samples, which are generated by adding tiny perturbations to clean samples. These perturbations are imperceptible to the human eye but readily cause a deep learning model to misclassify. Recent research on defending against adversarial samples in HSI classification has improved the robustness of deep networks by exploiting global contextual information. However, existing methods do not distinguish contextual information belonging to different classes, which makes the global context unreliable and raises the success rate of attacks. To address this problem, we propose a robust context-aware network for defending against adversarial samples in HSI classification. The proposed model builds a global contextual representation by aggregating the features learned via dilated convolution, and then explicitly models intra-class and inter-class contextual information through a class context-aware learning module (including an affinity loss) that further refines the global context. This module helps pixels acquire more reliable long-range dependencies and improves the overall robustness of the model against adversarial attacks. Experiments on several benchmark HSI datasets demonstrate that the proposed method is more robust and generalizes better than other state-of-the-art techniques.
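To make the two mechanisms named in the abstract concrete, the PyTorch sketch below shows one plausible form of (1) aggregating multi-rate dilated-convolution features into a global contextual representation and (2) an affinity loss that pulls same-class pixel features together and pushes different-class features apart. All names here (DilatedContextBlock, affinity_loss) and every design detail are illustrative assumptions; this is a minimal sketch, not the authors' released implementation.

import torch
import torch.nn as nn
import torch.nn.functional as F

class DilatedContextBlock(nn.Module):
    """Aggregate features learned at several dilation rates (sketch)."""
    def __init__(self, in_ch, out_ch, rates=(1, 2, 4)):
        super().__init__()
        # One 3x3 branch per dilation rate; padding=r keeps spatial size fixed.
        self.branches = nn.ModuleList([
            nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=r, dilation=r)
            for r in rates
        ])
        # 1x1 conv fuses the concatenated multi-scale responses.
        self.fuse = nn.Conv2d(out_ch * len(rates), out_ch, kernel_size=1)

    def forward(self, x):
        feats = [F.relu(branch(x)) for branch in self.branches]
        return self.fuse(torch.cat(feats, dim=1))

def affinity_loss(features, labels, margin=0.0):
    """Toy intra-/inter-class affinity objective on pixel embeddings.

    features: (N, C) pixel feature vectors; labels: (N,) class ids.
    Assumes the batch contains at least two classes.
    """
    feats = F.normalize(features, dim=1)
    affinity = feats @ feats.t()                      # (N, N) cosine similarities
    same = labels.unsqueeze(0) == labels.unsqueeze(1)
    intra = (1.0 - affinity)[same].mean()             # same class: affinity -> 1
    inter = F.relu(affinity[~same] - margin).mean()   # different class: affinity -> <= margin
    return intra + inter

A short usage example under the same assumptions: a batch of band-reduced HSI patches is passed through the block, and the resulting pixel features are flattened for the loss.

x = torch.randn(2, 64, 16, 16)                        # (batch, channels, H, W)
ctx = DilatedContextBlock(64, 32)(x)                  # (2, 32, 16, 16) global context
pix = ctx.permute(0, 2, 3, 1).reshape(-1, 32)         # (N, C) pixel features
labels = torch.randint(0, 5, (pix.size(0),))          # dummy per-pixel class ids
loss = affinity_loss(pix, labels)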
