Abstract

Change detection identifies changes between remote sensing images of the same region acquired at different times. On high-resolution images, change detection algorithms based on deep neural networks significantly outperform traditional algorithms. State-of-the-art (SOTA) change detection methods require sufficient labeled data to achieve good results, but semantic change detection demands not only binary change masks but also “from-to” change information, so large quantities of change labels are difficult to obtain. Achieving high semantic change detection accuracy with a limited number of labels therefore remains an open problem in remote sensing. In this paper, we propose a feature-guided multitask change detection network (MCDnet). Feature guidance comprises three components: 1) a multitask learning network that uses Siamese encoders to learn segmentation and change detection features simultaneously, realizing mutual guidance between the two tasks; 2) a fine-grained feature fusion module that integrates and enhances change information under the guidance of symmetrical change features; and 3) a contrastive loss function based on the a priori knowledge that the features of changed regions should differ while those of unchanged regions should match. Experimental results show that MCDnet achieves SOTA results on three public change detection datasets: WHU-CD (F1: 94.46 / IoU: 89.50), LEVIR (F1: 92.11 / IoU: 85.37), and SECOND (mIoU: 73.1 / Sek: 22.8). Notably, MCDnet remains comparable to SOTA models while using only 20% of the full training data.
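The contrastive loss described above can be sketched as a standard pixel-wise margin-based contrastive formulation: unchanged pixels pull their bitemporal features together, while changed pixels push them apart up to a margin. The function name, tensor shapes, and margin value below are illustrative assumptions, not the paper's exact implementation:

```python
import numpy as np

def contrastive_change_loss(feat_a, feat_b, change_mask, margin=2.0):
    """Sketch of a pixel-wise contrastive loss for change detection.

    feat_a, feat_b : (H, W, C) feature maps from the two Siamese encoder branches.
    change_mask    : (H, W) binary mask, 1 = changed pixel, 0 = unchanged pixel.
    margin         : minimum feature distance demanded for changed pixels.
    """
    # Per-pixel Euclidean distance between the bitemporal features.
    dist = np.linalg.norm(feat_a - feat_b, axis=-1)
    # Unchanged pixels: penalize any feature distance (pull together).
    unchanged_term = (1.0 - change_mask) * dist ** 2
    # Changed pixels: penalize distances smaller than the margin (push apart).
    changed_term = change_mask * np.maximum(margin - dist, 0.0) ** 2
    return float((unchanged_term + changed_term).mean())
```

For example, identical features under an all-unchanged mask give zero loss, while identical features under an all-changed mask are penalized until their distance reaches the margin.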
