Abstract

Recent progress on remote sensing (RS) scene classification has been substantial, driven largely by the rapid development of convolutional neural networks (CNNs). However, unlike natural images, in which objects typically occupy most of the frame, objects in RS images are usually small and scattered. Consequently, there is still considerable room for improvement over vanilla CNNs, which extract global image-level features for RS scene classification while ignoring local object-level features. In this article, we propose a novel RS scene classification method based on an enhanced feature pyramid network (EFPN) with deep semantic embedding (DSE). First, the proposed framework extracts multiscale, multilevel features using the EFPN. Then, to leverage the complementary advantages of the multilevel and multiscale features, we design a DSE module to generate discriminative features. Finally, a feature fusion module, called two-branch deep feature fusion (TDFF), aggregates the features at different levels in an effective way. Our method produces state-of-the-art results on two widely used RS scene classification benchmarks, with better effectiveness and accuracy than existing algorithms. Beyond that, we conduct an exhaustive analysis of the role of each module in the proposed architecture, and the experimental results further verify the merits of the proposed method.
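To make the pipeline above concrete, the following is a minimal NumPy sketch of the two core ideas the abstract describes: building multiscale features from a single feature map (in the spirit of a feature pyramid) and fusing a global image-level branch with a local object-level branch. All function names (`build_pyramid`, `two_branch_fusion`) and design choices here are illustrative assumptions, not the paper's actual EFPN/TDFF implementation.

```python
import numpy as np

def build_pyramid(feature_map, levels=3):
    """Downsample a (H, W, C) feature map by 2x2 average pooling at
    successive scales, mimicking the multiscale outputs of a feature
    pyramid. Assumes spatial dimensions are divisible by 2**(levels-1)."""
    pyramid = [feature_map]
    current = feature_map
    for _ in range(levels - 1):
        h, w, c = current.shape
        current = current.reshape(h // 2, 2, w // 2, 2, c).mean(axis=(1, 3))
        pyramid.append(current)
    return pyramid

def two_branch_fusion(global_feat, local_feat, alpha=0.5):
    """Hypothetical two-branch fusion: a weighted sum of an image-level
    feature vector and an object-level feature vector (a stand-in for
    the learned TDFF module described in the abstract)."""
    return alpha * global_feat + (1 - alpha) * local_feat

# Toy 8x8 feature map with 4 channels.
fmap = np.random.rand(8, 8, 4)
pyramid = build_pyramid(fmap, levels=3)

# Global branch: average-pool the coarsest level (image-level context).
global_feat = pyramid[-1].mean(axis=(0, 1))
# Local branch: max-pool the finest level (salient object responses).
local_feat = pyramid[0].max(axis=(0, 1))

fused = two_branch_fusion(global_feat, local_feat)
```

In the actual method the pyramid levels come from CNN stages and the fusion weights are learned; this sketch only shows why the two branches are complementary: average pooling of coarse maps summarizes scene context, while max pooling of fine maps preserves responses from small, scattered objects.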
