Abstract

Remote sensing image scene classification is an important method for understanding high-resolution remote sensing images. Various classification methods based on convolutional neural networks have been applied to this field and have achieved remarkable results. These methods mainly rely on semantic information to improve classification performance. However, as the network goes deeper, the highly abstract and global semantic information makes it difficult for the network to accurately classify scene images with similar layouts and structures, limiting further improvement of classification accuracy. Relying on semantic information alone is not sufficient to effectively classify these similar scene images; the network also needs spatial information to enhance its classification capability. To resolve this dilemma, this article proposes a best representation branch model, which reaches the optimal balance point at which the network can exploit both semantic and spatial information to improve the final classification accuracy. In the proposed method, a ResNet50 pretrained on the ImageNet dataset is first divided into four branches with different depths to extract feature maps, and a capsule network is used as the classifier. The Grad-CAM algorithm is adopted to explain the mechanism of the optimal balance point from the perspective of attention and to guide further feature fusion. In addition, ablation studies are conducted to verify the effectiveness of our method, and extensive experiments are carried out on three public benchmark remote sensing datasets. The results demonstrate that the proposed method achieves competitive classification performance compared with state-of-the-art methods.
