Abstract

Unsupervised domain adaptation (UDA) is an important approach to reducing the distribution bias between a labeled source domain and an unlabeled target domain, and it has attracted growing attention for optical remote sensing image scene classification and retrieval. Most previous work addresses closed-set UDA, yet in practice the target domain often contains unknown classes. Moreover, some open-set UDA methods mine the structural information of the target domain from the class knowledge of the source domain rather than directly from the unlabeled target data. In this paper, we propose a new self-supervised-driven open-set UDA method that combines contrastive self-supervised learning with consistency self-training for optical remote sensing scene classification and retrieval. Specifically, a contrastive self-supervised learning network is introduced to learn discriminative features from the unlabeled target-domain data. Moreover, a novel open-set class learning module is developed based on two-level confidence rules and a consistency self-training strategy, which obtains reliable unknown-class samples for co-training. Finally, an open-set dataset comprising six cross-domain scenarios is constructed from three public datasets, and experiments are conducted against eleven state-of-the-art domain adaptation methods. The results demonstrate that our method achieves superior performance on all six open-set cross-domain scenarios in both scene classification and retrieval. In particular, on the challenging UCMD (source domain) → NWPU (target domain) scenario, our method improves overall classification accuracy by 9.72% to 24.06% and mean average retrieval precision by 8.06% to 16.21% compared with the eleven state-of-the-art methods.
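To make the idea of the two-level confidence rules concrete, the following is a minimal sketch of how unknown-class candidates might be filtered from target-domain predictions. The function name, the specific thresholds, and the choice of maximum softmax probability plus prediction entropy as the two confidence levels are illustrative assumptions, not the paper's actual formulation.

```python
import numpy as np

def select_unknown_candidates(probs, conf_thresh=0.5, entropy_thresh=1.0):
    """Hypothetical two-level confidence rule: a target sample is kept as a
    reliable 'unknown class' candidate only when (1) its maximum known-class
    probability is low AND (2) its prediction entropy is high.
    probs: (n_samples, n_known_classes) softmax outputs."""
    probs = np.asarray(probs, dtype=float)
    max_conf = probs.max(axis=1)                             # level 1: peak confidence
    entropy = -(probs * np.log(probs + 1e-12)).sum(axis=1)   # level 2: uncertainty
    return (max_conf < conf_thresh) & (entropy > entropy_thresh)

# A confidently classified sample is rejected; a near-uniform one is selected.
preds = [[0.90, 0.05, 0.05],   # confident known-class prediction
         [0.34, 0.33, 0.33]]   # ambiguous prediction, likely unknown class
print(select_unknown_candidates(preds))  # → [False  True]
```

Samples passing both checks could then serve as pseudo-labeled unknown-class examples for the co-training step described above.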
