Abstract

Because remote sensing image data sets contain few categories and relatively little labeled data, it is challenging to apply the fusion of deep convolutional features to them directly. To address this issue, we propose a pyramid multi-subset feature fusion method that effectively fuses the deep features extracted from different pre-trained convolutional neural networks and integrates their global and local information, thereby obtaining low-dimensional features with stronger discriminative power. By weighting the differences between categories, we design a weighted discriminant correlation analysis method that pays more attention to categories that are difficult to distinguish. To mine both global and local feature information, a pyramid scheme divides the fusion into several layers: each layer splits the features into subsets and fuses the corresponding subsets, with the number of subsets gradually increasing from top to bottom. Fusion at the top of the pyramid yields a global representation, while fusion at the bottom yields a representation of local detail. Experimental results on three public remote sensing image data sets demonstrate that the proposed multi-deep-feature fusion method improves on other state-of-the-art deep learning methods.
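The following is a minimal sketch (not the authors' code) of the pyramid multi-subset idea described above: two feature matrices from different pre-trained CNNs are split into an increasing number of subsets per level, and corresponding subsets are fused level by level. The subset counts (1, 2, 4) and the fusion step (plain concatenation, standing in for the paper's weighted discriminant correlation analysis) are illustrative assumptions.

```python
import numpy as np

def pyramid_multisubset_fusion(feat_a, feat_b, subsets_per_level=(1, 2, 4)):
    """Fuse two (n_samples, dim) feature matrices level by level.

    At each pyramid level the features are split into an equal number of
    subsets along the feature dimension; corresponding subsets are fused
    (here simply concatenated) and all fused pieces are stacked into one
    final feature vector per sample.
    """
    fused_parts = []
    for n_subsets in subsets_per_level:          # top (global) -> bottom (local)
        parts_a = np.array_split(feat_a, n_subsets, axis=1)
        parts_b = np.array_split(feat_b, n_subsets, axis=1)
        for sub_a, sub_b in zip(parts_a, parts_b):
            # Placeholder fusion: concatenate the corresponding subsets.
            # In the paper this step is a weighted discriminant correlation
            # analysis projection that yields a low-dimensional fused subset.
            fused_parts.append(np.concatenate([sub_a, sub_b], axis=1))
    return np.concatenate(fused_parts, axis=1)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    f1 = rng.normal(size=(8, 512))   # features from pre-trained CNN #1
    f2 = rng.normal(size=(8, 256))   # features from pre-trained CNN #2
    fused = pyramid_multisubset_fusion(f1, f2)
    print(fused.shape)               # top level captures global structure,
                                     # lower levels capture local detail
```

The top level (one subset) fuses whole feature vectors and so captures a global representation, while lower levels operate on progressively smaller slices and so capture local detail, matching the layered structure described in the abstract.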
