Abstract

The accelerated urban development of recent decades has made it necessary to update spatial information rapidly and constantly. Consequently, three-dimensional city models have been widely used as a basis for studying various urban problems. However, although many efforts have been made to develop new building extraction methods, reliable and automatic extraction remains a major challenge for the remote sensing and computer vision communities, mainly due to the complexity and variability of urban scenes. This paper presents a method to extract building roof boundaries in object space by integrating a stereo pair of high-resolution aerial images, three-dimensional roof models reconstructed from light detection and ranging (LiDAR) data, and contextual information of the scenes involved. The proposed method focuses on overcoming three common problems that can disturb automatic roof extraction in the urban environment: perspective occlusions caused by high buildings, occlusions caused by vegetation covering the roof, and shadows adjacent to the roofs, which can be misinterpreted as roof edges. For this, an improved Snake-based mathematical model is developed, considering the radiometric and geometric properties of roofs, to represent the roof boundary in image space. A new approach for calculating the corner response and a shadow compensation factor were added to the model. The model is then adapted to represent the boundaries in object space using a stereo pair of aerial images. Finally, the optimal polyline representing a selected roof boundary is obtained by optimizing the proposed Snake-based model with a dynamic programming (DP) approach that takes the contextual information of the scene into account. The results showed that the proposed method works properly in boundary extraction of roofs with occlusion and shadow areas, presenting average completeness and correctness values above 90%, average RMSE values below 0.5 m for the E and N components, and below 1 m for the H component.

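To make the optimization step more concrete, the sketch below shows how a polyline can be optimized with dynamic programming in the Viterbi style commonly used for discrete Snakes. It is a minimal, generic illustration, not the authors' implementation: the actual energy terms of the paper (gradient, corner response, shadow compensation factor, and the object-space formulation) are not detailed in the abstract, so here they are assumed to be already combined into a per-candidate `external_energy` array, and the smoothness weight `alpha` is a hypothetical parameter.

```python
import numpy as np

def dp_snake(candidates, external_energy, alpha=1.0):
    """Optimize a polyline by dynamic programming (Viterbi-style sketch).

    candidates[i]      : (K, 2) array of candidate positions for vertex i
    external_energy[i] : (K,) array of image-based energies for vertex i
                         (assumed to combine gradient, corner response and
                         a shadow compensation term)
    alpha              : weight of the first-order continuity term (assumed)
    """
    n = len(candidates)
    K = candidates[0].shape[0]
    cost = np.full((n, K), np.inf)       # accumulated energy per candidate
    back = np.zeros((n, K), dtype=int)   # back-pointers to best predecessor

    cost[0] = external_energy[0]
    for i in range(1, n):
        # pairwise continuity term between consecutive vertices (K x K)
        d = np.linalg.norm(candidates[i][None, :, :] -
                           candidates[i - 1][:, None, :], axis=2)
        total = cost[i - 1][:, None] + alpha * d + external_energy[i][None, :]
        back[i] = np.argmin(total, axis=0)
        cost[i] = total[back[i], np.arange(K)]

    # recover the minimum-energy polyline by backtracking
    path = [int(np.argmin(cost[-1]))]
    for i in range(n - 1, 0, -1):
        path.append(int(back[i, path[-1]]))
    path.reverse()
    return np.array([candidates[i][k] for i, k in enumerate(path)])
```

In this kind of scheme, each roof-boundary vertex is restricted to a small set of candidate positions (e.g., along the normal to an approximate boundary), and the DP pass selects the combination that minimizes the total internal-plus-external energy in polynomial time.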