Abstract

Remote sensing image (RSI) scene classification is a foundational technology for ground object detection, land use management and geographic analysis. In recent years, convolutional neural networks (CNNs) have achieved significant success and are widely applied in RSI scene classification. However, crafted images that serve as adversarial examples can fool CNNs with high confidence while remaining hard for human eyes to distinguish from the originals. Given the increasing security and robustness requirements of RSI scene classification, adversarial examples pose a serious threat to classification results produced by systems built on CNN models, a threat that has not been fully recognized by previous research. In this study, to explore the properties of adversarial examples in RSI scene classification, we create different scenarios by applying two major attack algorithms (i.e., the fast gradient sign method (FGSM) and the basic iterative method (BIM)) to CNNs (i.e., InceptionV1, ResNet and a simple CNN) trained on different RSI benchmark datasets. Our experimental results show that CNNs for RSI scene classification are also vulnerable to adversarial examples, with fooling rates exceeding 80% in some cases. These adversarial examples are affected by both the CNN architecture and the type of RSI dataset: InceptionV1 has a fooling rate of less than 5%, lower than the other models, and adversarial examples are easier to generate on the UCM dataset than on the other datasets. Importantly, we also find that the classes of adversarial examples exhibit an attack selectivity property: misclassifications of adversarial RSIs are related to the similarity of the original classes in the CNN feature space. Attack selectivity reveals the likely classes of adversarial examples and provides insights for the design of defensive algorithms in future research.
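As a point of reference for the FGSM attack discussed above, the sketch below shows the core single-step perturbation, x_adv = x + ε · sign(∇_x L). It is a minimal sketch assuming a PyTorch classifier, cross-entropy loss and inputs scaled to [0, 1]; the function name and epsilon value are illustrative assumptions, not the authors' exact configuration.

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, images, labels, epsilon=0.01):
    # FGSM: x_adv = x + epsilon * sign(grad_x L(f(x), y))
    # (illustrative sketch; epsilon and the pixel range are assumptions)
    images = images.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(images), labels)
    loss.backward()
    adv_images = images + epsilon * images.grad.sign()
    return adv_images.clamp(0.0, 1.0).detach()  # keep pixel values in [0, 1]
```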

Highlights

  • With the advancement of remote sensing technology, the automatic interpretation of remote sensing images (RSIs) has greatly improved [1]–[4]

  • By analyzing the adversarial example problem in RSI scene classification, we find that mainstream convolutional neural network (CNN) models used for RSI scene classification are vulnerable to adversarial examples

  • The experiments show that adversarial examples are difficult for the human eye to recognize, yet they cause the RSI scene classification system to produce false results


Summary

INTRODUCTION

With the advancement of remote sensing technology, the automatic interpretation of remote sensing images (RSIs) has greatly improved [1]–[4]. Chaib et al. [35] used a CNN model for feature extraction and discriminant correlation analysis (DCA) for data fusion. These RSI scene classification models perform much better than traditional methods in terms of accuracy, and the efficiency of RSI scene classification has also been improved through high-performance computing. We train several CNN models that are among the most widely applied in RSI scene classification systems. On these high-accuracy CNNs, we use a variety of attack algorithms to generate different adversarial examples. We find that the fundamental issues related to adversarial examples of RSIs are model vulnerability and attack selectivity. This means that different RSI scene classification CNNs have different security characteristics, so the cost of obtaining adversarial examples varies.
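To make the attack pipeline described above concrete, the following is a minimal sketch of the BIM attack (iterative FGSM) together with one common way of computing a fooling rate. It assumes a PyTorch model and inputs scaled to [0, 1]; the function names, step sizes and the particular fooling-rate definition are illustrative assumptions rather than the authors' exact setup.

```python
import torch
import torch.nn.functional as F

def bim_attack(model, images, labels, epsilon=0.03, alpha=0.005, steps=10):
    # BIM: repeat small signed-gradient steps, clipping back into the
    # epsilon-ball around the original image after each step.
    x_orig = images.clone().detach()
    x_adv = x_orig.clone()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), labels)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = torch.min(torch.max(x_adv, x_orig - epsilon), x_orig + epsilon)
        x_adv = x_adv.clamp(0.0, 1.0)
    return x_adv.detach()

def fooling_rate(model, images, adv_images):
    # One common definition: the fraction of inputs whose predicted class
    # changes after the attack is applied.
    with torch.no_grad():
        clean_pred = model(images).argmax(dim=1)
        adv_pred = model(adv_images).argmax(dim=1)
    return (clean_pred != adv_pred).float().mean().item()
```

In such a setup, a fooling rate over 80% would mean the attack changes the prediction for more than four out of five images, while a rate under 5% (as reported for InceptionV1) indicates the model is much harder to fool under the same budget.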

RELATED WORK
CONVOLUTIONAL NEURAL NETWORKS
EXPERIMENTS
Findings
CONCLUSION
