Abstract

Change detection (CD) is a challenging task on high-resolution bitemporal remote sensing images. Many recent studies of CD have focused on designing fully convolutional Siamese network architectures. However, most of these methods initialize their encoders with random values or an ImageNet-pretrained model, providing no prior knowledge of the CD task and thus limiting the performance of the CD model. In this article, a novel supervised contrastive pretraining and fine-tuning CD (SCPFCD) framework, composed of two cascaded stages, is presented to train a CD network on top of a pretrained encoder. In the first stage, supervised contrastive pretraining, the encoder of the Siamese network solves a joint pretext task introduced by the proposed CDContrast pretraining method on labeled CD data. CDContrast combines land contrastive learning (LCL), which is based on supervised contrastive learning, with proxy CD learning. LCL learns the spatial relationships among land covers in bitemporal images by solving a land contrast task, while proxy CD learning performs a proxy CD task on top of an upsampling projector to keep LCL from converging to local optima and to learn features useful for CD. In the second fine-tuning stage, the whole Siamese network, initialized with the pretrained encoder, is fine-tuned end to end on the CD task. The proposed SCPFCD framework was verified on three CD datasets of high-resolution remote sensing images. Extensive experimental results consistently show that the proposed framework effectively improves the ability of Siamese networks to extract change information.
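For readers unfamiliar with the pretraining objective the abstract refers to, the following minimal PyTorch sketch illustrates a generic supervised contrastive loss of the kind LCL builds on: embeddings with the same land-cover label attract each other, all other samples in the batch repel. This is not the paper's CDContrast implementation; the function name, temperature value, and batch construction are assumptions made only for illustration.

```python
# Hypothetical sketch; names and details are assumptions, not the paper's code.
import torch
import torch.nn.functional as F

def supervised_contrastive_loss(features, labels, temperature=0.07):
    """Supervised contrastive loss: samples sharing the same land-cover label
    are positives for each other; all other samples in the batch are negatives."""
    z = F.normalize(features, dim=1)                       # L2-normalize embeddings
    sim = z @ z.T / temperature                            # pairwise similarities
    not_self = ~torch.eye(len(labels), dtype=torch.bool, device=z.device)
    positives = (labels.unsqueeze(0) == labels.unsqueeze(1)) & not_self
    # log-probability of each pair under a softmax over all non-self pairs
    exp_sim = torch.exp(sim) * not_self
    log_prob = sim - torch.log(exp_sim.sum(dim=1, keepdim=True) + 1e-12)
    # average over positive pairs, for anchors that have at least one positive
    pos_per_anchor = positives.sum(dim=1)
    loss = -(log_prob * positives).sum(dim=1) / pos_per_anchor.clamp(min=1)
    return loss[pos_per_anchor > 0].mean()

# Stage 1 (sketch): pretrain the Siamese encoder with this contrastive objective
# (the paper additionally applies a proxy CD loss on an upsampling projector).
# Stage 2 (sketch): initialize the CD network with the pretrained encoder and
# fine-tune it end to end with a standard change-detection loss.
```

The two trailing comments mirror the two cascaded stages described in the abstract; how the bitemporal patches are sampled and labeled for the land contrast task is detailed only in the full text.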
