Abstract

Deep neural networks (DNNs), which learn hierarchical feature representations, have shown remarkable performance in big data analytics for remote sensing. However, previous research indicates that DNNs are easily fooled by adversarial examples, i.e., images with deliberately crafted perturbations that drive DNN models toward wrong predictions. To comprehensively evaluate the impact of adversarial examples on remote sensing image (RSI) scene classification, this study tests eight state-of-the-art classification DNNs on six RSI benchmarks. These data sets include both optical and synthetic aperture radar (SAR) images of different spectral and spatial resolutions. In the experiments, we create 48 classification scenarios and use four cutting-edge attack algorithms to investigate the influence of adversarial examples on RSI classification. The experimental results show that the fooling rates of the attacks are all above 98% across the 48 scenarios. We also find that, for the optical data, the severity of the adversarial problem is negatively related to the richness of the feature information. In addition, adversarial examples generated from SAR images can easily fool the models, with an average fooling rate of 76.01%. By analyzing the class distribution of these adversarial examples, we find that the distribution of the misclassifications is not affected by the type of model or attack algorithm: adversarial examples of RSIs belonging to the same class cluster on a few fixed classes. The analysis of the classes of adversarial examples not only helps us explore the relationships between the classes of a data set but also provides insights for designing defensive algorithms.
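The abstract does not name the four attack algorithms or describe the evaluation code, so the following Python/PyTorch sketch is only an illustration of the general setting: a basic gradient-sign (FGSM-style) attack that crafts perturbed images, together with a fooling-rate computation of the kind reported above. The model, data loader, device, and epsilon value are hypothetical placeholders and do not reflect the paper's actual setup.

    # Illustrative sketch only: a representative gradient-sign attack and a
    # fooling-rate metric; not the paper's specific attack algorithms or code.
    import torch
    import torch.nn.functional as F

    def fgsm_attack(model, images, labels, epsilon=0.03):
        """Craft adversarial examples with a single signed-gradient step."""
        images = images.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(images), labels)
        loss.backward()
        # Perturb each pixel in the direction that increases the loss.
        adv = images + epsilon * images.grad.sign()
        return adv.clamp(0.0, 1.0).detach()

    def fooling_rate(model, loader, epsilon=0.03, device="cpu"):
        """Fraction of correctly classified images whose adversarial
        counterparts are misclassified by the same model."""
        model.eval()
        fooled, correct = 0, 0
        for images, labels in loader:
            images, labels = images.to(device), labels.to(device)
            with torch.no_grad():
                clean_pred = model(images).argmax(dim=1)
            mask = clean_pred == labels  # only count originally correct samples
            if mask.sum() == 0:
                continue
            adv = fgsm_attack(model, images[mask], labels[mask], epsilon)
            with torch.no_grad():
                adv_pred = model(adv).argmax(dim=1)
            fooled += (adv_pred != labels[mask]).sum().item()
            correct += mask.sum().item()
        return fooled / max(correct, 1)

In this formulation, a fooling rate close to 100% (as reported for the 48 scenarios) means that nearly every correctly classified RSI can be turned into a misclassified one by a small perturbation.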
