Abstract

The reconstruction of urban buildings from large-scale airborne laser scanning point clouds is an important research topic in the geoscience field. Large-scale urban scenes usually contain a large number of object categories and many overlapping or closely neighboring objects, which poses great challenges for classifying and modeling buildings from these data sets. In this paper, we propose a deep reinforcement learning framework that integrates a 3-D convolutional neural network, a deep Q-network, and a residual recurrent neural network for the efficient semantic parsing of large-scale 3-D point clouds. The proposed framework provides an end-to-end automatic processing method that maps the raw point cloud to the classification results of the given categories. After obtaining the building classes, we utilize an edge-aware resampling algorithm to consolidate the point set with noise-free normals and clean preservation of sharp features. Finally, 2.5-D dual contouring, a data-driven approach, is introduced to generate urban building models from the consolidated point clouds. Our method can generate lightweight building models with arbitrarily shaped roofs while preserving the verticality of the connecting walls.
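The abstract describes a three-stage pipeline: semantic classification of the raw point cloud, edge-aware consolidation of the building points, and 2.5-D dual contouring for model generation. The following is a minimal structural sketch of that data flow; all function names and bodies are hypothetical placeholders, not the authors' implementation (which relies on trained networks and a full reconstruction algorithm).

```python
# Hypothetical sketch of the pipeline's data flow. Every name here is a
# placeholder for illustration only, not the paper's actual code.

def classify_points(points):
    """Stage 1 (placeholder): semantic parsing of the raw point cloud.
    The paper uses a 3-D CNN + deep Q-network + residual RNN; here we
    simply tag every point as 'building' to show the interface."""
    return [(p, "building") for p in points]

def consolidate(labeled_points):
    """Stage 2 (placeholder): keep the building-class points, standing in
    for the paper's edge-aware resampling, which also produces noise-free
    normals and preserves sharp features."""
    return [p for p, label in labeled_points if label == "building"]

def reconstruct(points):
    """Stage 3 (placeholder): stand-in for 2.5-D dual contouring, which
    in the paper outputs a lightweight building model with arbitrarily
    shaped roofs and vertical walls."""
    return {"building_points": len(points)}

def pipeline(points):
    # Raw point cloud -> classified points -> consolidated set -> model.
    return reconstruct(consolidate(classify_points(points)))

model = pipeline([(0.0, 0.0, 5.0), (1.0, 0.0, 5.1)])
```

The sketch only makes the stage ordering and intermediate data types explicit; each placeholder would be replaced by the corresponding learned or geometric component.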
