Abstract

Change detection is the process of identifying changed ground objects by comparing image pairs acquired at different times. Compared with pixel-level and object-level change detection, scene-level change detection provides semantic changes at the image level, making it important for applications related to change description and explanation, such as urban functional area change monitoring. Automatic scene-level change detection approaches do not require ground truth for training, making them more appealing in practical applications than nonautomatic methods. However, existing automatic scene-level change detection methods utilize only low-level and mid-level features to extract changes between bitemporal images, failing to fully exploit deep information. To address this issue, this article proposes a novel automatic binary scene-level change detection approach based on deep learning. First, a pretrained VGG-16 and change vector analysis are adopted for direct scene-level predetection to produce a scene-level pseudo-change map. Second, pixel-level classification is performed using a decision tree, and a pixel-level to scene-level conversion strategy is designed to generate a second scene-level pseudo-change map. Third, scene-level training samples are obtained by fusing the two pseudo-change maps. Finally, the binary scene-level change map is produced by training a novel scene change detection triplet network (SCDTN). The proposed SCDTN integrates a late-fusion subnetwork and an early-fusion subnetwork, comprehensively mining the deep information in each raw image as well as the temporal correlation between the two raw images. Experiments were performed on a public dataset and a new challenging dataset, and the results demonstrated the effectiveness and superiority of the proposed approach.
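
The following is a minimal sketch, in PyTorch, of how the predetection step described above could be realized: deep features are extracted from each bitemporal scene image with a pretrained VGG-16, and change vector analysis (CVA) reduces to the magnitude of the bitemporal feature difference. This is an illustrative assumption rather than the authors' implementation; the function names and the suggested thresholding rule are hypothetical.

    # Hypothetical sketch of scene-level predetection with a pretrained
    # VGG-16 and change vector analysis (CVA); not the authors' code.
    import torch
    import torchvision.models as models
    import torchvision.transforms as T

    # Convolutional backbone of an ImageNet-pretrained VGG-16, frozen.
    vgg = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1).features.eval()
    preprocess = T.Compose([
        T.Resize((224, 224)),
        T.ToTensor(),
        T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
    ])

    @torch.no_grad()
    def scene_feature(img):
        """Deep feature vector for one scene image (a PIL.Image)."""
        x = preprocess(img).unsqueeze(0)          # (1, 3, 224, 224)
        fmap = vgg(x)                             # (1, 512, 7, 7)
        return fmap.mean(dim=(2, 3)).squeeze(0)   # global average pooling -> (512,)

    def scene_change_magnitude(img_t1, img_t2):
        """CVA on deep features: L2 norm of the bitemporal difference vector."""
        return torch.norm(scene_feature(img_t2) - scene_feature(img_t1)).item()

    # Scenes whose magnitude exceeds a threshold (e.g., chosen by Otsu's
    # method over all scene pairs) would be flagged as pseudo-changed,
    # yielding one of the pseudo-change maps fused in the later steps.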
