Abstract

The rapid development of remote sensing technology has enabled the acquisition of large collections of high-resolution remote sensing scene images. Aerial scene classification has consequently become a crucial problem in understanding high-resolution remote sensing imagery. In this letter, we propose a novel framework for aerial scene classification. Unlike traditional methods, in which features are produced by handcrafted feature descriptors, the proposed method uses a raw RGB network stream and a saliency-coded network stream to extract two different types of informative features. We then propose a deep feature fusion model that fuses these two sets of features for final classification. The proposed method is evaluated comprehensively on two publicly available remote sensing scene classification benchmarks, i.e., the UC-Merced dataset and the AID dataset. Experimental results show that the proposed method achieves satisfactory results and outperforms state-of-the-art approaches.
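The abstract does not specify the fusion model's internals, but the two-stream idea can be illustrated with a minimal sketch: extract one feature vector per stream, fuse them (concatenation is one common choice, used here purely as an assumption), and classify the fused representation. All dimensions, the random features, and the linear classifier below are hypothetical placeholders, not the paper's actual architecture.

```python
import numpy as np

# Hypothetical setup: 4 scene images, 128-dim features per stream,
# 10 scene classes. None of these numbers come from the paper.
rng = np.random.default_rng(0)
rgb_feat = rng.standard_normal((4, 128))  # features from the raw RGB stream
sal_feat = rng.standard_normal((4, 128))  # features from the saliency-coded stream

# One simple fusion strategy (an assumption, not the paper's stated model):
# concatenate the two feature sets into a single fused representation.
fused = np.concatenate([rgb_feat, sal_feat], axis=1)  # shape (4, 256)

# Placeholder linear classifier with softmax over scene classes.
n_classes = 10
W = rng.standard_normal((256, n_classes)) * 0.01
logits = fused @ W
probs = np.exp(logits - logits.max(axis=1, keepdims=True))
probs /= probs.sum(axis=1, keepdims=True)
pred = probs.argmax(axis=1)  # one class label per image

print(fused.shape)  # (4, 256)
print(probs.shape)  # (4, 10)
```

In practice each stream's features would come from a deep network rather than random vectors, and the fusion and classification layers would be learned jointly; this sketch only shows the data flow.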

