Abstract

Building change detection from very high-spatial-resolution (VHR) remote sensing images has gained increasing popularity in a variety of applications, such as urban planning and damage assessment. Detecting fine-grained “from–to” changes (change transitions from one land cover type to another) of buildings in VHR images remains challenging because multitemporal representations are complicated. Recently, fully convolutional neural networks (FCNs) have proven capable of feature extraction and semantic segmentation of VHR images, but their ability in change detection remains largely untested. In this letter, we leverage the semantic segmentation of buildings as an auxiliary source of information for fine-grained “from–to” change detection. A deep multitask learning framework for change detection (MTL-CD) is proposed for detecting building changes in VHR images. MTL-CD adopts an encoder–decoder architecture and solves the main task of change detection and the auxiliary tasks of semantic segmentation simultaneously. Accordingly, the change detection loss function is constrained by the auxiliary semantic segmentation tasks, enabling back-propagation of building-footprint detection errors to improve change detection. A building change detection data set named the Guangzhou data set is also developed for model evaluation, in which the bitemporal R–G–B images were collected by airplane (2009) and unmanned aerial vehicle (UAV, 2019) at different flight heights. Experiments on the Guangzhou data set demonstrate that the MTL-CD method effectively detects fine-grained “from–to” changes and outperforms postclassification methods and direct change detection methods.
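
To make the multitask idea concrete, the sketch below shows one way such a framework could be wired up: a shared encoder applied to both dates, two auxiliary segmentation heads, and a main change-detection head whose loss is combined with the segmentation losses. This is a minimal illustration, not the authors' MTL-CD implementation; the layer sizes, the concatenation-based fusion, and the `aux_weight` balancing factor are all assumptions introduced here for clarity.

```python
# Minimal multitask sketch (illustrative only, not the authors' MTL-CD code):
# a shared encoder, a change-detection head, and two auxiliary segmentation
# heads whose losses regularize the change-detection training.
import torch
import torch.nn as nn
import torch.nn.functional as F


class MultiTaskChangeNet(nn.Module):
    def __init__(self, in_ch=3, num_seg_classes=2, num_change_classes=2):
        super().__init__()
        # Shared encoder applied to each date independently (Siamese style;
        # an assumption, since the letter only specifies an encoder-decoder).
        self.encoder = nn.Sequential(
            nn.Conv2d(in_ch, 32, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(inplace=True),
        )
        # Auxiliary head: per-date building segmentation logits.
        self.seg_head = nn.Conv2d(64, num_seg_classes, 1)
        # Main head: "from-to" change logits from concatenated bitemporal features.
        self.cd_head = nn.Conv2d(128, num_change_classes, 1)

    def forward(self, img_t1, img_t2):
        f1, f2 = self.encoder(img_t1), self.encoder(img_t2)
        seg1, seg2 = self.seg_head(f1), self.seg_head(f2)
        change = self.cd_head(torch.cat([f1, f2], dim=1))
        return change, seg1, seg2


def multitask_loss(change, seg1, seg2, y_change, y_seg1, y_seg2, aux_weight=0.5):
    # Main change-detection loss plus weighted auxiliary segmentation losses,
    # so building-footprint errors also back-propagate through the shared encoder.
    l_cd = F.cross_entropy(change, y_change)
    l_seg = F.cross_entropy(seg1, y_seg1) + F.cross_entropy(seg2, y_seg2)
    return l_cd + aux_weight * l_seg
```

Because all three heads share the encoder, minimizing the joint loss couples the auxiliary segmentation supervision to the change-detection task, which is the mechanism the abstract describes as constraining the change detection loss with the segmentation tasks.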
