ABSTRACT

Remote sensing image change detection has significant applications across many fields. In recent years, the powerful feature extraction capability of deep learning has introduced innovative approaches to change detection. However, the accuracy of convolutional neural network algorithms hinges heavily on labelled data, and generating labelled samples requires substantial manpower and resources. This paper therefore proposes an unsupervised change detection model called CIUCD. Built on the fundamental premise of unsupervised change detection, the model trains a generator and a segmenter iteratively: in each iteration the generator and segmenter update and optimize their parameters, and the segmenter uses the change images produced by the generator as labels for training. This iterative labelling process circumvents the need for manually labelled data, thereby mitigating the difficulty of obtaining change detection labels for real remote sensing images. To validate the efficacy of the proposed model, we conducted experiments on real remote sensing images from the 2023 Turkey earthquake and the 2022 Afghanistan earthquake. The results show that, compared with the unsupervised change detection algorithm KPCA-Mnet, our method improves F1, precision (Pr), recall (Re) and overall accuracy (OA) by 5.98%, 2.05%, 11.82% and 8.50% on the Turkey earthquake data, and by 9.54%, 5.87%, 4.52% and 11.34% on the Afghanistan earthquake data, respectively. Experiments on real remote sensing images of 1000 × 1000 pixels show that the CIUCD algorithm not only accurately detects large change areas but also identifies changes in small objects such as solar panels. In addition, the CIUCD algorithm is resilient to natural environmental factors such as illumination angle, enabling precise prediction of changes in real remote sensing images and yielding clearer detection results. The performance of the CIUCD algorithm remains strong even when applied to larger images of 3000 × 3000 pixels.
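A minimal sketch of the iterative generator-segmenter training loop described above, assuming a PyTorch implementation. The network architectures, the stand-in generator objective (regressing towards the bi-temporal image difference), the binarisation threshold and all hyperparameters are illustrative assumptions rather than the authors' actual CIUCD design; the sketch only shows how change images produced by the generator can serve as pseudo-labels for training the segmenter without manual annotation.

```python
# Hypothetical sketch of the generator/segmenter pseudo-label loop; not the authors' code.
import torch
import torch.nn as nn

class TinyNet(nn.Module):
    """Placeholder convolutional network standing in for both the generator
    and the segmenter; the real CIUCD architectures are not specified here."""
    def __init__(self, in_ch: int, out_ch: int):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(in_ch, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, out_ch, 3, padding=1),
        )

    def forward(self, x):
        return self.body(x)  # returns per-pixel logits

def iterative_train(pairs, num_iters=5, threshold=0.5, device="cpu"):
    """pairs: iterable of (img_t1, img_t2) tensors shaped (B, 3, H, W)."""
    generator = TinyNet(6, 1).to(device)   # consumes the concatenated image pair
    segmenter = TinyNet(6, 1).to(device)
    g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
    s_opt = torch.optim.Adam(segmenter.parameters(), lr=1e-3)
    bce = nn.BCEWithLogitsLoss()

    for _ in range(num_iters):
        for img_t1, img_t2 in pairs:
            x = torch.cat([img_t1, img_t2], dim=1).to(device)

            # 1) Generator step: produce a change image. As a stand-in objective
            #    it is pushed towards the per-pixel image difference; the actual
            #    generator loss in CIUCD may differ.
            diff = (img_t1 - img_t2).abs().mean(dim=1, keepdim=True).to(device)
            g_loss = nn.functional.mse_loss(torch.sigmoid(generator(x)), diff)
            g_opt.zero_grad()
            g_loss.backward()
            g_opt.step()

            # 2) Binarise the generated change image into pseudo-labels,
            #    so no manually labelled data is needed.
            with torch.no_grad():
                pseudo = (torch.sigmoid(generator(x)) > threshold).float()

            # 3) Segmenter step: train against the pseudo-labels.
            s_loss = bce(segmenter(x), pseudo)
            s_opt.zero_grad()
            s_loss.backward()
            s_opt.step()
    return generator, segmenter
```

In practice the pairs would be co-registered pre- and post-event images; the alternating generator and segmenter updates in each pass mirror the iterative optimization described in the abstract.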