Abstract

A procedure for fusing LiDAR and photogrammetric point clouds for building extraction, based on U-Net deep-learning segmentation, is presented and tested. First, an initial geo-localization is performed for the photogrammetric point clouds generated with structure-from-motion and dense-matching methods. Point cloud segmentation is then carried out based on the U-Net deep learning model. The precision of the U-Net model for building extraction reaches 87%, with an F-score of 0.89 and an IoU of 0.80, showing that the U-Net method is effective for extraction from high-resolution imagery: detailed features, such as vegetation located between buildings and roads, can be accurately identified and extracted. After segmentation, each chunk of the LiDAR and photogrammetric point clouds is finely registered and merged based on the iterative closest point (ICP) algorithm, yielding the fused point clouds. The structure and shape of the buildings can be delineated from the fused point clouds when both enough ground points and a sufficiently high point density are available; furthermore, color information improves both the visualization effect and the identification of object properties. Experiments were conducted to extract individual buildings from the three types of point clouds in three plots, using a Difference of Normals (DoN) approach to isolate 3D buildings from other objects in densely built-up areas. Most building extraction results have a Precision > 0.9 and favorable Recall and F-score values. Although the LiDAR extraction results have some advantages over the photogrammetric and fused ones in terms of Precision, the Recall and F-score results are best for the fused point clouds, indicating that the fused data, with its high point density and RGB color information, can improve building extraction.
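
The fine registration step described above uses the iterative closest point algorithm. As a rough illustration (not the authors' implementation; the function names `icp_step` and `icp` and all parameters are invented here), a minimal point-to-point ICP with brute-force nearest-neighbour matching and a Kabsch/SVD alignment step can be sketched in NumPy:

```python
import numpy as np

def icp_step(src, dst):
    """One ICP iteration: match each src point to its nearest dst point,
    then solve the best rigid transform (Kabsch / SVD)."""
    # brute-force nearest neighbours (fine for small clouds)
    d2 = ((src[:, None, :] - dst[None, :, :]) ** 2).sum(-1)
    matched = dst[d2.argmin(axis=1)]
    # closed-form rigid alignment of src onto matched
    mu_s, mu_m = src.mean(0), matched.mean(0)
    H = (src - mu_s).T @ (matched - mu_m)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:        # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = mu_m - R @ mu_s
    return R, t

def icp(src, dst, iters=30):
    """Iterate icp_step, accumulating the total rigid transform."""
    cur = src.copy()
    R_total, t_total = np.eye(3), np.zeros(3)
    for _ in range(iters):
        R, t = icp_step(cur, dst)
        cur = cur @ R.T + t
        R_total = R @ R_total
        t_total = R @ t_total + t
    return R_total, t_total
```

Point-to-point ICP of this kind only converges from a good initial guess, which is why the procedure performs an initial geo-localization of the photogrammetric cloud before the fine, per-chunk registration.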

Highlights

  • The extraction and identification of 3D urban buildings have become a crucial issue in many applications, such as urban building database updating, city management, disaster assessment, digital mapping, transportation planning, cadastre, and telecommunication network management [1, 2]

  • An urban building map from U-Net segmentation was utilized for point cloud segmentation

  • The U-Net convolutional neural network model was applied for image segmentation
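
The segmentation quality figures quoted for the U-Net model (precision, F-score, IoU) follow from standard pixel-count definitions; a minimal sketch (the function name `seg_metrics` and the example counts are illustrative, not taken from the paper):

```python
def seg_metrics(tp, fp, fn):
    """Precision, recall, F-score and IoU from true-positive,
    false-positive and false-negative pixel counts."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f_score = 2 * precision * recall / (precision + recall)
    iou = tp / (tp + fp + fn)   # intersection over union
    return precision, recall, f_score, iou

# e.g. 80 correctly labelled building pixels, 10 false alarms, 10 misses
p, r, f, i = seg_metrics(80, 10, 10)
```

Note that F-score and IoU are monotonically related, IoU = F / (2 - F), which is consistent with the reported pair of 0.89 and 0.80.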



Introduction

The extraction and identification of 3D urban buildings have become a crucial issue in many applications, such as urban building database updating, city management, disaster assessment, digital mapping, transportation planning, cadastre, and telecommunication network management [1, 2]. Technologies such as remote sensing, computer vision, and machine learning have opened opportunities and prospects for automated building extraction. Research shows that deep learning algorithms effectively address the problems of building extraction from complex high-resolution images, which is of great significance for high-precision urban ecological environment monitoring. It remains challenging, however, to achieve satisfactory results for dense building extraction, because obstructions from surrounding buildings and tall trees are virtually unavoidable, especially in very high spatial resolution remote sensing images. It is therefore almost impossible to extract 3D building information from images alone.

