Abstract

Recent advances in deep learning have dramatically improved the performance of content-based remote sensing image retrieval (CBRSIR) when the training set (source domain) and the test set (target domain) share the same distribution. In practice, however, the two distributions are often inconsistent, which can lead to a dramatic decrease in retrieval performance. Several unsupervised domain adaptation (DA) methods have been proposed for other remote sensing applications to eliminate this inconsistency. However, current unsupervised DA methods do not make full use of the target domain’s distribution characteristics when delineating its decision boundary, which tends to degrade cross-domain retrieval performance. In this article, a pseudo-label consistency learning-based unsupervised DA method (PCLUDA) is proposed for cross-domain CBRSIR. PCLUDA minimizes the difference between the probability distribution of target-domain predictions and that of their perturbed counterparts through a pseudo-label self-training and consistency regularization strategy, thereby pushing the target domain’s decision boundaries toward low-density regions. In addition, a minimum class confusion (MCC) loss is introduced to reduce the negative transfer caused by the large intraclass variance of remote sensing images (RSIs). Two cross-domain datasets covering 12 cross-domain scenarios are constructed from six open-access datasets to evaluate DA methods. Experimental results show that PCLUDA improves average retrieval precision by 4.9%–32.3% over eight state-of-the-art DA approaches in complex cross-domain scenarios. Further experiments indicate that PCLUDA also achieves the best retrieval performance with different kinds of deep learning networks [i.e., the vision transformer (ViT) and convolutional neural networks (CNNs)].
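
To make the pseudo-label consistency and class-confusion ideas concrete, the following is a minimal PyTorch sketch, not the authors' implementation. The classifier model, the perturbation function augment, the confidence threshold tau, and the exact form of both loss terms are assumptions introduced here for illustration only.

# Illustrative sketch of pseudo-label consistency training on unlabeled
# target-domain images plus a class-confusion penalty; all names and the
# precise loss forms are assumed, not taken from the PCLUDA paper.
import torch
import torch.nn.functional as F

def consistency_loss(model, target_images, augment, tau=0.95):
    """Make the prediction on a perturbed target image match the confident
    pseudo-label obtained from the unperturbed image."""
    with torch.no_grad():
        probs = F.softmax(model(target_images), dim=1)          # target predictions
        conf, pseudo_labels = probs.max(dim=1)                   # confidence and pseudo-label
        mask = (conf >= tau).float()                             # keep only confident samples
    logits_perturbed = model(augment(target_images))             # perturbed-view predictions
    per_sample = F.cross_entropy(logits_perturbed, pseudo_labels, reduction="none")
    return (mask * per_sample).mean()                            # masked consistency term

def class_confusion_loss(logits, temperature=2.5):
    """Penalty in the spirit of minimum class confusion (MCC): penalize the
    off-diagonal mass of the batch class-correlation matrix."""
    probs = F.softmax(logits / temperature, dim=1)               # temperature-scaled probabilities
    confusion = probs.t() @ probs                                # C x C class-correlation matrix
    confusion = confusion / confusion.sum(dim=1, keepdim=True)   # row-normalize
    num_classes = confusion.size(0)
    return (confusion.sum() - confusion.trace()) / num_classes   # mean off-diagonal confusion

In a training loop, these two terms would be added, with suitable weights, to a supervised loss on the labeled source domain; the thresholding mask ensures that only confident target pseudo-labels shape the decision boundary.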
