Abstract

In recent years, demand for 3D building models has grown across several applications, including cartography and urban planning. This has driven the development of automated extraction algorithms, which reduce time and cost compared to manual on-screen digitizing. Most algorithms address the problem using either LiDAR datasets or aerial imagery alone. Since each data source has its own weaknesses, integrating the two holds greater promise for 3D modeling, as the limitations of one source can be offset by the other. In this article, we outline an algorithm that generates 3D building wireframes from LiDAR DEMs and high-resolution aerial images. Each post in the DEM is assigned five attributes representing the intensity and elevations in its neighborhood. Posts are then classified as ground or non-ground using a feedforward back-propagation neural network. Non-ground points are grouped into planar patches using the Hough transform, and these patches are iteratively refined using an L1-norm blunder detector and a region-growing segmentation algorithm. Finally, topological relationships among roof planes and boundary points are enforced through regression analyses. The algorithm is tested on a number of buildings with complex rooftops, and the results demonstrate promising precision and completeness in modeling various building shapes.
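The abstract's patch-refinement step (fitting roof planes and rejecting blunders) can be illustrated with a minimal sketch. The snippet below is not the authors' implementation: it fits a plane z = ax + by + c to a synthetic roof patch by least squares and iteratively discards the largest-residual point, a simplified stand-in for the L1-norm blunder detector described above. All data, function names, and the tolerance value are illustrative assumptions.

```python
import numpy as np

def fit_plane(pts):
    """Least-squares fit of z = a*x + b*y + c; returns (a, b, c)."""
    A = np.c_[pts[:, 0], pts[:, 1], np.ones(len(pts))]
    coeffs, *_ = np.linalg.lstsq(A, pts[:, 2], rcond=None)
    return coeffs

def refine_patch(pts, tol=0.05, max_iter=10):
    """Drop the worst-fitting point until all residuals fall below tol.

    A simplified stand-in for the blunder-detection loop: refit the
    plane, find the point with the largest vertical residual, and
    remove it if it exceeds the tolerance.
    """
    for _ in range(max_iter):
        a, b, c = fit_plane(pts)
        res = np.abs(pts[:, 2] - (a * pts[:, 0] + b * pts[:, 1] + c))
        worst = np.argmax(res)
        if res[worst] < tol:
            break
        pts = np.delete(pts, worst, axis=0)
    return pts, (a, b, c)

# Synthetic roof patch on the plane z = 0.5x + 0.2y + 10,
# with one gross error injected (e.g., a spurious LiDAR return).
rng = np.random.default_rng(0)
xy = rng.uniform(0, 10, size=(50, 2))
z = 0.5 * xy[:, 0] + 0.2 * xy[:, 1] + 10.0
pts = np.c_[xy, z]
pts[7, 2] += 3.0  # the blunder

clean, (a, b, c) = refine_patch(pts)
print(len(clean), round(a, 2), round(b, 2), round(c, 2))  # 49 0.5 0.2 10.0
```

After the single blunder is removed, the remaining 49 points lie exactly on the plane, so the recovered coefficients match the ones used to generate the patch. A real pipeline would instead minimize the L1 norm of residuals and grow regions across neighboring DEM posts, but the reject-and-refit structure is the same.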
