Abstract

3D building models are important for many applications related to human activities in urban environments. However, due to the high complexity of building structures, it is still difficult to automatically reconstruct building models with an accurate geometric description and semantic information. To simplify this problem, this article proposes a novel approach that automatically decomposes compound buildings with symmetric roofs into semantic primitives by exploiting the local symmetry contained in the building structure. The proposed decomposition allows neighboring primitives to overlap, and each decomposed primitive can be represented in a parametric form, which reduces the complexity of building reconstruction and facilitates the integration of LiDAR data and aerial imagery into a parameter optimization process. The proposed method starts by extracting isolated building regions from the LiDAR point clouds. Next, the point clouds belonging to each compound building are segmented into planar patches to construct an attributed graph, and the local symmetries contained in the attributed graph are exploited to automatically decompose the compound building into different semantic primitives. In the final step, 2D image features are extracted based on the initial 3D primitives generated from the LiDAR data, and the compound building is reconstructed using constraints from both the LiDAR data and the aerial imagery through a nonlinear least squares optimization. The proposed method is applied to two datasets with different point densities to show that the complexity of building reconstruction can be reduced considerably by decomposing compound buildings into semantic primitives. The experimental results also demonstrate that traditional model-driven methods can be extended to the automated reconstruction of compound buildings by using the proposed semantic decomposition method.
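To make the final optimization step more concrete, the following is a minimal sketch, not the authors' formulation, of fitting a single symmetric gable primitive to the LiDAR heights of its footprint with a nonlinear least squares solver. The `gable_height` parameterization, the synthetic points, and all parameter names are illustrative assumptions; the actual method additionally folds constraints derived from aerial-image features into the same optimization.

```python
import numpy as np
from scipy.optimize import least_squares

def gable_height(params, xy):
    """Roof height of a symmetric gable primitive at planimetric positions xy.

    params = [y0, z_ridge, slope]: ridge-line y position, ridge height, and
    roof slope (identical on both sides because the roof is symmetric).
    The primitive is assumed to be aligned with the x axis for brevity.
    """
    y0, z_ridge, slope = params
    return z_ridge - slope * np.abs(xy[:, 1] - y0)

def residuals(params, xy, z):
    # Vertical distances between observed LiDAR heights and the model roof.
    return z - gable_height(params, xy)

# Synthetic stand-in for the LiDAR points of one decomposed primitive.
rng = np.random.default_rng(0)
xy = rng.uniform([0.0, -5.0], [20.0, 5.0], size=(500, 2))
z = 12.0 - 0.6 * np.abs(xy[:, 1] - 0.3) + rng.normal(0.0, 0.05, 500)

fit = least_squares(residuals, x0=[0.0, 10.0, 0.5], args=(xy, z))
print("ridge y, ridge height, slope:", fit.x)
```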

Highlights

  • Two fundamentally different methods have been used for automated 3D building reconstruction: data-driven methods and model-driven methods

  • A roof primitive is usually constituted by two symmetric planar patches, a local symmetry characteristic implicitly contained in the building structure. By exploiting this knowledge about local symmetry, we present an automatic algorithm to decompose compound buildings into semantic primitives (see the sketch after these highlights)

  • The “Vaihingen” dataset was provided by the ISPRS test project on urban classification and 3D building reconstruction [50], which was acquired by the Leica ALS50 system with an average point density of 4 points/m² at a mean flying height of 500 m above ground level
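The local symmetry mentioned above can be illustrated with a small, hypothetical test between two segmented roof patches. This is a minimal sketch assuming unit patch normals are already available from the segmentation step; the normal-based criterion and the tolerance are illustrative assumptions and not the paper's attributed-graph matching procedure.

```python
import numpy as np

def is_symmetric_pair(n1, n2, angle_tol_deg=5.0):
    """Heuristic test for the local symmetry used to group two roof patches.

    n1, n2 are unit normals of two planar patches (hypothetical inputs from a
    prior segmentation step). The pair is considered mirror-symmetric about a
    vertical plane when the patches have nearly equal tilt and their
    horizontal normal components point in nearly opposite directions.
    """
    tilt1 = np.degrees(np.arccos(np.clip(n1[2], -1.0, 1.0)))
    tilt2 = np.degrees(np.arccos(np.clip(n2[2], -1.0, 1.0)))
    if abs(tilt1 - tilt2) > angle_tol_deg:
        return False
    h1, h2 = n1[:2], n2[:2]
    if np.linalg.norm(h1) < 1e-6 or np.linalg.norm(h2) < 1e-6:
        return False  # flat patches carry no horizontal facing direction
    cos_h = np.dot(h1, h2) / (np.linalg.norm(h1) * np.linalg.norm(h2))
    return np.degrees(np.arccos(np.clip(cos_h, -1.0, 1.0))) > 180.0 - angle_tol_deg

# Two patches of a gable roof: equal tilt, opposite facing directions -> True.
print(is_symmetric_pair(np.array([0.0, 0.5, 0.866]),
                        np.array([0.0, -0.5, 0.866])))
```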


Summary

Introduction

Two fundamentally different methods have been used for automated 3D building reconstruction: data-driven methods and model-driven methods. For data-driven methods, a common assumption is that buildings have a polyhedral form, i.e., buildings only have planar roofs. These methods usually start with the extraction of planar patches from LiDAR point clouds using segmentation algorithms such as region growing [3], random sample consensus [4], the 3D Hough transform [5], and clustering methods [6,7]. If features are missing from the data, the modeling process may be hampered and the corresponding object structure may be visually deformed. In this case, aerial imagery can serve as a complementary data source for accurately generating building models due to its high resolution. A variety of methods using both LiDAR data and optical imagery for building reconstruction have been proposed in the last ten years [10,11,12,13,14,15,16,17,18,19].
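As a rough illustration of the plane-segmentation step such data-driven methods start from, the sketch below fits a single plane to a LiDAR point cloud with random sample consensus. The function, its thresholds, and its iteration count are illustrative assumptions rather than the authors' implementation; in practice the fit is repeated on the points not yet assigned to a plane and each plane is refined by a least squares fit over its inliers.

```python
import numpy as np

def ransac_plane(points, n_iters=200, dist_tol=0.1, rng=None):
    """Minimal RANSAC plane fit over a LiDAR point cloud (N x 3 array).

    Returns (normal, d, inlier_mask) for the plane n·p + d = 0 that collects
    the most inliers within dist_tol (in the same units as the points).
    """
    rng = rng or np.random.default_rng()
    best_inliers, best_plane = None, None
    for _ in range(n_iters):
        p0, p1, p2 = points[rng.choice(len(points), 3, replace=False)]
        normal = np.cross(p1 - p0, p2 - p0)
        norm = np.linalg.norm(normal)
        if norm < 1e-9:              # degenerate (collinear) sample
            continue
        normal /= norm
        d = -np.dot(normal, p0)
        inliers = np.abs(points @ normal + d) < dist_tol
        if best_inliers is None or inliers.sum() > best_inliers.sum():
            best_inliers, best_plane = inliers, (normal, d)
    return best_plane[0], best_plane[1], best_inliers
```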
