Abstract
Few-shot learning aims to generalize from very few labeled examples. To achieve few-shot scene classification of optical remote sensing images, we propose a two-stage framework that first learns a general-purpose representation and then propagates knowledge in a transductive paradigm. Concretely, the first stage jointly trains a semantic class prediction task and two auxiliary objectives in a multi-task model: rotation prediction estimates the 2D transformation applied to an input, and contrastive prediction pulls together positive pairs while pushing apart negative pairs. The second stage seeks an expected prototype with minimal distance to all samples of the same class. In particular, label propagation makes joint predictions for both labeled and unlabeled data; the labeled set is then expanded with pseudo-labeled samples, forming a rectified prototype that supports more accurate nearest-neighbor classification. Extensive experiments on standard benchmarks, including NWPU-RESISC45, AID, and WHU-RS19, demonstrate that our method is effective and significantly outperforms state-of-the-art approaches.
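The second stage described above can be sketched in code. The following is a minimal, hedged illustration of transductive label propagation followed by prototype rectification, not the paper's exact implementation: the Gaussian affinity, the propagation coefficient `alpha`, and the helper name `rectified_prototypes` are all assumptions chosen for clarity.

```python
import numpy as np

def rectified_prototypes(support, support_labels, query, n_classes,
                         alpha=0.5, sigma=1.0):
    """Sketch of the second stage: propagate labels over support + query
    features, then rectify class prototypes with pseudo-labeled samples.
    (Illustrative only; hyperparameters alpha/sigma are assumptions.)"""
    X = np.vstack([support, query])              # all embeddings, support first
    n_s = len(support)
    # Gaussian affinity matrix with zeroed diagonal
    d2 = ((X[:, None] - X[None, :]) ** 2).sum(-1)
    W = np.exp(-d2 / (2 * sigma ** 2))
    np.fill_diagonal(W, 0.0)
    # Symmetric normalization: S = D^{-1/2} W D^{-1/2}
    D = W.sum(1)
    S = W / np.sqrt(np.outer(D, D))
    # One-hot labels for the labeled (support) rows, zeros for query rows
    Y = np.zeros((len(X), n_classes))
    Y[np.arange(n_s), support_labels] = 1.0
    # Closed-form label propagation: F = (I - alpha * S)^{-1} Y
    F = np.linalg.solve(np.eye(len(X)) - alpha * S, Y)
    pseudo = F.argmax(1)                         # joint prediction for all samples
    # Rectified prototype: class mean over support plus pseudo-labeled query
    protos = np.stack([
        X[pseudo == c].mean(0) if (pseudo == c).any()
        else support[support_labels == c].mean(0)
        for c in range(n_classes)
    ])
    return protos, pseudo[n_s:]
```

A query sample is then classified by its nearest rectified prototype, e.g. `((query[:, None] - protos[None]) ** 2).sum(-1).argmin(1)`.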