Abstract

Owing to the wide application of remote sensing (RS) image scene classification, the task has attracted increasing attention from researchers. With the development of convolutional neural networks (CNNs), CNN-based methods for RS image scene classification have made impressive progress. Most existing architectures, however, consider only the global information of RS images. Global information contains many redundant areas that diminish classification performance, and these methods ignore the local information that reflects finer spatial details of local objects. Furthermore, most CNN-based methods assign the same weight to every feature vector, causing the model to fail to discriminate the crucial features. In this article, a novel method based on Two-branch Deep Feature Embedding (TDFE) with a dual attention-aware (DAA) module is proposed for RS image scene classification. To mine more complementary information, the TDFE module extracts high-level global semantic-based features and low-level local object-based features. Then, to focus selectively on the key global-semantic feature maps as well as the key local regions, we propose a DAA module to attend to this key information. Extensive experiments on two widely used RS scene classification benchmarks demonstrate the effectiveness and superiority of the proposed method.
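The two-branch idea described above can be illustrated with a minimal NumPy sketch. This is not the authors' implementation: the channel-attention, spatial-attention, and fusion functions below are hypothetical stand-ins that only show the general pattern of weighting global feature maps per channel, weighting local feature maps per spatial location, and concatenating the two pooled descriptors into one embedding.

```python
import numpy as np

def channel_attention(feats):
    # feats: (C, H, W) global feature maps; weight each channel by its
    # global-average response passed through a softmax (illustrative
    # stand-in for attention over global-semantic feature maps)
    w = feats.mean(axis=(1, 2))               # (C,) per-channel response
    w = np.exp(w - w.max())
    w /= w.sum()                              # softmax over channels
    return feats * w[:, None, None]

def spatial_attention(feats):
    # feats: (C, H, W) local feature maps; gate each spatial location by
    # its mean activation across channels (stand-in for attention over
    # key local regions)
    m = feats.mean(axis=0)                    # (H, W) location response
    m = 1.0 / (1.0 + np.exp(-m))              # sigmoid gate in (0, 1)
    return feats * m[None, :, :]

def dual_attention_fuse(global_feats, local_feats):
    # Attend over each branch, pool to a vector, and concatenate the
    # two descriptors into a single embedding for classification.
    g = channel_attention(global_feats).mean(axis=(1, 2))  # (Cg,)
    l = spatial_attention(local_feats).mean(axis=(1, 2))   # (Cl,)
    return np.concatenate([g, l])

rng = np.random.default_rng(0)
embedding = dual_attention_fuse(rng.normal(size=(8, 4, 4)),
                                rng.normal(size=(4, 8, 8)))
print(embedding.shape)  # one fused descriptor per image
```

In a full pipeline, the two inputs would come from a high-level and a low-level stage of a CNN backbone, and the fused embedding would feed a classifier head.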
