Abstract

Building Change Detection (BCD) via multi-temporal remote sensing images is essential for various applications such as urban monitoring, urban planning, and disaster assessment. However, most building change detection approaches extract features only from remote sensing images to determine the change index, and therefore fail to detect subtle changes of small buildings. For co-registered multi-temporal remote sensing images, illumination variations and misregistration errors often lead to inaccurate change detection results. This study investigates the applicability of multi-feature fusion, combining 2D features extracted directly from remote sensing images with 3D features extracted from the 3D point cloud generated by dense image matching (DIM), for accurate building change index generation. This paper introduces a graph neural network (GNN) based end-to-end learning framework for building change detection. The proposed framework consists of feature extraction, feature fusion, and change index prediction. It starts with a pre-trained VGG-16 network as the backbone and uses a five-layer U-Net architecture for feature map extraction. The extracted 2D and 3D features are fed into the GNN-based feature fusion module. In the GNN module, we introduce a flexible attention-based context aggregation mechanism to address illumination variations and misregistration errors, enabling the framework to jointly reason about image-based texture information and the depth information introduced by the DIM-generated 3D point cloud. The affinity matrix generated by the GNN is then used for change index determination via the Hungarian algorithm. An experiment conducted on a dataset covering the Setagaya-ku area of Tokyo shows that the change map generated by the proposed method achieves a precision of 0.762 and an F1-score of 0.68 at the pixel level. Compared to traditional image-based change detection methods, our approach learns a prior over the geometric structure of the real 3D world, which makes it robust to misregistration errors. Compared to CNN-based methods, the proposed method learns to fuse 2D and 3D features to represent more comprehensive information for building change index determination. The experimental comparisons demonstrate that the proposed approach outperforms both traditional methods and CNN-based methods.
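To make the pipeline described above concrete, the following is a minimal sketch of the attention-based 2D/3D feature fusion and the Hungarian-algorithm change index step, assuming PyTorch and SciPy. The module name AttentionFusion, the feature dimensions, and the change threshold are illustrative assumptions for exposition, not the authors' released implementation.

```python
# Sketch of attention-based fusion of 2D (image) and 3D (DIM point cloud)
# features, followed by Hungarian matching on the affinity matrix.
# Names, dimensions, and the threshold below are illustrative assumptions.
import torch
import torch.nn as nn
from scipy.optimize import linear_sum_assignment


class AttentionFusion(nn.Module):
    """Cross-attention between 2D image features and 3D DIM features."""

    def __init__(self, dim: int = 256, heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.proj = nn.Linear(dim, dim)

    def forward(self, feat_2d: torch.Tensor, feat_3d: torch.Tensor) -> torch.Tensor:
        # feat_2d: (B, N, dim) patch features from the U-Net / VGG-16 backbone
        # feat_3d: (B, N, dim) features sampled from the DIM point cloud
        fused, _ = self.attn(query=feat_2d, key=feat_3d, value=feat_3d)
        return self.proj(fused + feat_2d)  # residual fusion


def change_indices(feat_t1: torch.Tensor, feat_t2: torch.Tensor,
                   threshold: float = 0.5):
    """Match epoch-1 and epoch-2 descriptors and flag weak matches as changes."""
    # Affinity between all candidate building descriptors of the two epochs.
    affinity = torch.einsum(
        "nd,md->nm",
        nn.functional.normalize(feat_t1, dim=-1),
        nn.functional.normalize(feat_t2, dim=-1),
    )
    # The Hungarian algorithm minimizes cost, so negate the affinity.
    rows, cols = linear_sum_assignment(-affinity.detach().cpu().numpy())
    # Matched pairs whose affinity stays below the (assumed) threshold are
    # reported as changed buildings.
    return [(r, c) for r, c in zip(rows, cols)
            if affinity[r, c].item() < threshold]
```

In this sketch the fused descriptors for each epoch would be produced by AttentionFusion and then passed to change_indices; the actual framework predicts the affinity matrix within the GNN rather than from raw cosine similarity.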

Highlights

  • Building Change Detection (BCD) via multi-temporal remote sensing images is essential for various applications such as urban monitoring, urban planning, and disaster assessment

  • In this paper, we present a combined 2D and 3D BCD method based on multi-feature fusion via a graph neural network

  • We show the change detection results of the proposed method for selected areas alongside the ground truth


Summary

Introduction

Building Change Detection (BCD) via multi-temporal remote sensing images is essential for various applications such as urban monitoring, urban planning, and disaster assessment. Automated BCD has been a hot topic in the remote sensing field in recent years and has accelerated the development of industrial production and applications (Xiao, 2017). Over the past few decades, many BCD methods have been proposed. Based on the type of input data, BCD methods can be categorized as 2D BCD and 3D BCD. Traditional 2D BCD methods use image information for building segmentation and apply post-classification methods for change detection. With the development of sensors, remotely sensed images have reached finer resolution. Pixels in very high resolution images contain more detailed information, making BCD results more sensitive to pixel-based comparison (Liu, 2020).

