Abstract

The large amount of unlabeled remote sensing images acquired from different sources and at different times (defined as multiple views in this article) presents both an opportunity and a challenge for change detection. Recently, many generative model-based methods have been proposed for remote sensing image change detection on such unlabeled data. However, the high diversity of the learned features weakens the discrimination of the relevant change indicators in unsupervised change detection tasks. Moreover, these methods have not been studied extensively on massive archived images. In this work, a self-supervised change detection approach based on an unlabeled multiview setting is proposed to overcome these limitations. This is achieved by the use of a multiview contrastive loss in the feature alignment between multiview images. In this approach, a pseudo-Siamese network is trained to regress the output between its two branches, which are pretrained in a contrastive way on a large dataset of single-sensor or cross-sensor image pairs. Finally, the feature distance between the outputs of the two branches is used to define a change measure, which can be analyzed by thresholding to obtain the final binary change map. Experiments are carried out on two single-sensor and three cross-sensor datasets. The proposed approach is compared with other supervised and unsupervised state-of-the-art change detection methods. Results demonstrate improvements over state-of-the-art unsupervised methods and show that the proposed approach narrows the gap between unsupervised and supervised change detection.
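To make the described pipeline concrete, the sketch below illustrates the general idea in Python/PyTorch: a pseudo-Siamese network with two independently weighted branches, an InfoNCE-style contrastive objective for pretraining on unlabeled image pairs, and a per-pixel feature distance that is thresholded into a binary change map. This is not the authors' released code; the encoder depth, channel counts, loss formulation, and threshold value are illustrative assumptions.

```python
# Minimal sketch of a pseudo-Siamese, contrastively pretrained change detector.
# All architectural details here are assumptions for illustration only.
import torch
import torch.nn as nn
import torch.nn.functional as F


def small_encoder(in_ch: int) -> nn.Sequential:
    """A tiny convolutional encoder standing in for each branch's backbone."""
    return nn.Sequential(
        nn.Conv2d(in_ch, 32, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(64, 64, 3, padding=1),
    )


class PseudoSiamese(nn.Module):
    """Two branches with separate weights, one per view/sensor."""

    def __init__(self, ch_a: int = 3, ch_b: int = 3):
        super().__init__()
        self.branch_a = small_encoder(ch_a)
        self.branch_b = small_encoder(ch_b)

    def forward(self, x_a, x_b):
        # L2-normalize features so distances are comparable across images.
        f_a = F.normalize(self.branch_a(x_a), dim=1)
        f_b = F.normalize(self.branch_b(x_b), dim=1)
        return f_a, f_b


def contrastive_loss(f_a, f_b, temperature: float = 0.1):
    """InfoNCE-style loss over image-level embeddings (a common choice for
    multiview contrastive pretraining; the paper's exact loss may differ)."""
    z_a = F.normalize(f_a.mean(dim=(2, 3)), dim=1)    # (N, C)
    z_b = F.normalize(f_b.mean(dim=(2, 3)), dim=1)    # (N, C)
    logits = z_a @ z_b.t() / temperature              # (N, N) similarities
    targets = torch.arange(z_a.size(0))               # matching pairs on diagonal
    return F.cross_entropy(logits, targets)


def change_map(f_a, f_b, threshold: float = 0.5):
    """Per-pixel feature distance thresholded into a binary change map.
    The threshold here is fixed for illustration; in practice it would be
    obtained by analyzing the change measure (e.g., automatic thresholding)."""
    dist = torch.norm(f_a - f_b, dim=1)               # (N, H, W)
    return (dist > threshold).float()


if __name__ == "__main__":
    model = PseudoSiamese()
    t1 = torch.rand(4, 3, 64, 64)      # pre-event images
    t2 = torch.rand(4, 3, 64, 64)      # post-event images (possibly another sensor)
    f_a, f_b = model(t1, t2)
    loss = contrastive_loss(f_a, f_b)  # pretraining objective on unlabeled pairs
    cmap = change_map(f_a, f_b)        # inference-time binary change map
    print(loss.item(), cmap.shape)
```

In a cross-sensor setting, the two branches would take inputs with different channel counts (e.g., SAR versus optical), which is why the branches do not share weights.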
