Abstract

Few-shot scene classification aims to recognize unseen scene concepts from a few labeled samples. However, most existing works tend to learn meta-learners or transfer knowledge while overlooking the importance of learning discriminative representations and a proper metric for remote sensing images. To address these challenges, in this article, we propose an end-to-end network for boosting few-shot remote sensing image scene classification, called the discriminative learning of adaptive match network (DLA-MatchNet). Specifically, we first adopt the attention technique to delve into the inter-channel and inter-spatial relationships to automatically discover discriminative regions. The channel attention and spatial attention modules are then incorporated into the feature network through different feature fusion schemes, achieving “discriminative learning.” Afterward, considering the large intra-class variance and inter-class similarity of remote sensing images, instead of simply computing distances between support and query samples, we concatenate the support and query discriminative features along the depth dimension and employ a matcher to “adaptively” select semantically relevant sample pairs and assign them similarity scores. Our method uses an episode-based strategy to train the model. Once trained, the model can predict the category of a query image without further fine-tuning. Experimental results on three public remote sensing image data sets demonstrate the effectiveness of our model on the few-shot scene classification task.
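
For concreteness, the following is a minimal PyTorch sketch of the components named in the abstract: a CBAM-style channel attention and spatial attention pair for “discriminative learning,” and a matcher that scores depth-concatenated support/query feature pairs. This is an illustration under stated assumptions, not the authors' implementation; the module structure, layer sizes, and the toy backbone are all placeholders.

```python
# Minimal sketch of the DLA-MatchNet pipeline described in the abstract.
# Assumptions: CBAM-style attention modules and a small convolutional matcher;
# the actual architecture in the paper may differ.
import torch
import torch.nn as nn


class ChannelAttention(nn.Module):
    """Reweights feature channels using pooled per-channel descriptors."""
    def __init__(self, channels, reduction=8):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
        )

    def forward(self, x):                       # x: (B, C, H, W)
        avg = self.mlp(x.mean(dim=(2, 3)))      # global average pooling branch
        mx = self.mlp(x.amax(dim=(2, 3)))       # global max pooling branch
        w = torch.sigmoid(avg + mx)[..., None, None]
        return x * w


class SpatialAttention(nn.Module):
    """Highlights discriminative spatial regions in the feature map."""
    def __init__(self, kernel_size=7):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2)

    def forward(self, x):
        avg = x.mean(dim=1, keepdim=True)       # channel-wise average map
        mx = x.amax(dim=1, keepdim=True)        # channel-wise max map
        w = torch.sigmoid(self.conv(torch.cat([avg, mx], dim=1)))
        return x * w


class Matcher(nn.Module):
    """Scores a support/query pair from their depth-concatenated features,
    instead of applying a fixed distance metric."""
    def __init__(self, channels):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(2 * channels, channels, 3, padding=1),
            nn.BatchNorm2d(channels), nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(channels, 1),
        )

    def forward(self, support_feat, query_feat):
        pair = torch.cat([support_feat, query_feat], dim=1)  # concat in depth
        return torch.sigmoid(self.net(pair)).squeeze(-1)     # score in (0, 1)


# Toy usage for a single 5-way, 1-shot episode (backbone is a stand-in).
feat = nn.Sequential(nn.Conv2d(3, 64, 3, padding=1), nn.ReLU())
ca, sa, matcher = ChannelAttention(64), SpatialAttention(), Matcher(64)

support = sa(ca(feat(torch.randn(5, 3, 84, 84))))   # one image per class
query = sa(ca(feat(torch.randn(1, 3, 84, 84))))     # one query image
scores = matcher(support, query.expand(5, -1, -1, -1))  # 5 similarity scores
pred = scores.argmax()                               # predicted class index
```

In episodic training, the matcher's scores for each support/query pair would be supervised against the episode's labels, so the learned similarity, rather than a hand-picked metric, decides which pairs are semantically relevant.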
