Abstract

Because 3D point clouds offer advantages over 2D optical images, research on scene understanding in 3D point clouds has attracted increasing attention from academia and industry. However, most 3D scene understanding methods require abundant supervised information to train a data-driven model, and acquiring such supervision relies on manual annotation, which is laborious. To mitigate the manual effort of annotating training samples, this paper proposes a unified neural network that interactively segments 3D objects out of point clouds. In particular, to improve segmentation accuracy, the boundary information of 3D objects in point clouds is encoded as a boundary energy term in a Markov Random Field (MRF) model. The MRF model with the boundary energy term is then naturally integrated with a Graph Neural Network (GNN) to obtain a compact representation for generating boundary-preserved 3D objects. The proposed method is evaluated on two point cloud datasets acquired from different types of laser scanning systems, i.e., terrestrial laser scanning and mobile laser scanning. Comparative experiments show that the proposed method is effective and outperforms competing approaches for 3D object segmentation in different point cloud scenarios.
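For context, a boundary-aware MRF energy of the kind described above typically combines a unary data term with a boundary-modulated pairwise term. The following sketch is illustrative only; the symbols \(\phi_i\), \(\lambda\), and \(B_{ij}\) are assumptions, not the paper's exact formulation:

\[
E(\mathbf{x}) \;=\; \sum_{i \in \mathcal{V}} \phi_i(x_i) \;+\; \lambda \sum_{(i,j) \in \mathcal{E}} e^{-B_{ij}}\,\mathbb{1}\!\left[x_i \neq x_j\right],
\]

where \(x_i\) is the label of point \(i\), \(\phi_i\) is a unary cost (e.g., produced by the GNN), \(\mathcal{E}\) is the set of neighboring point pairs, and \(B_{ij}\) is a boundary score on edge \((i,j)\). The stronger the boundary evidence on an edge, the cheaper it is for labels to change across it, which encourages segmentation cuts to align with object boundaries.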
