Abstract

With the rapid development of remote sensing data acquisition technology, multimodal images of the same observed scene are increasingly available. These multimodal remote sensing images provide complementary, valuable information for land cover classification. In this article, we propose a novel self-supervised feature learning and few-shot classification model for multimodal remote sensing images, called S2FL. Specifically, a contrastive learning architecture is investigated to learn spatial feature representations from very high resolution (VHR) images. The spectral features from hyperspectral data are then integrated with the learned spatial features for few-shot land cover classification. Classification experiments are conducted on a widely used dataset, i.e., Houston 2018, to verify the effectiveness and superiority of the proposed S2FL model compared with several state-of-the-art baseline approaches.
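The spatial–spectral fusion and few-shot classification step described above can be sketched as follows. This is a minimal illustrative sketch, not the paper's exact architecture: the feature dimensions, the L2-normalized concatenation fusion, and the nearest-class-prototype classifier are all assumptions standing in for the contrastive spatial encoder and the few-shot head of S2FL.

```python
import numpy as np

def fuse_features(spatial, spectral):
    """Concatenate L2-normalized spatial features (assumed to come from a
    contrastively trained encoder on VHR imagery) with spectral features
    (assumed to come from the hyperspectral bands)."""
    spatial = spatial / np.linalg.norm(spatial, axis=-1, keepdims=True)
    spectral = spectral / np.linalg.norm(spectral, axis=-1, keepdims=True)
    return np.concatenate([spatial, spectral], axis=-1)

def few_shot_classify(support, support_labels, query):
    """Nearest-class-prototype few-shot classifier: each land cover class
    is represented by the mean of its few labeled support features."""
    classes = np.unique(support_labels)
    protos = np.stack([support[support_labels == c].mean(axis=0)
                       for c in classes])
    # Euclidean distance from every query sample to every class prototype
    d = np.linalg.norm(query[:, None, :] - protos[None, :, :], axis=-1)
    return classes[np.argmin(d, axis=1)]

# Toy example: 2 classes, 3 labeled shots each,
# 8-dim spatial features + 4-dim spectral features (illustrative sizes)
rng = np.random.default_rng(0)
spatial = rng.normal(size=(6, 8)) + np.repeat([[2.0], [-2.0]], 3, axis=0)
spectral = rng.normal(size=(6, 4))
support = fuse_features(spatial, spectral)
labels = np.array([0, 0, 0, 1, 1, 1])
query = fuse_features(spatial + 0.1, spectral + 0.1)
pred = few_shot_classify(support, labels, query)
```

In this sketch the self-supervised stage is assumed to have already produced the spatial feature vectors; fusion is simple concatenation of normalized vectors, and classification needs only a handful of labeled samples per class, mirroring the few-shot setting.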
