Abstract

Traditional model-driven 3D modelling suffers from unclear 3D edges in point clouds, a lack of geometric semantics, confused topological relations between 3D models, and a low degree of automation; a method for high-accuracy, automatic monomer 3D modelling is therefore needed. This paper is the first to propose a 3D modelling strategy, called combining data-and-model-driven 3D modelling (CDMD3DM), for small regular objects in indoor scenes using RGB-D data. The proposed workflow is as follows: first, an initial 3D point cloud is generated from Kinect v2 data (the data-driven step); second, the point cloud is segmented with a deep-learning network, improving the accuracy and automation of geometric model recognition; third, the initial model-driven parameters are defined from the instance segmentation results; fourth, the geometric model parameters are optimized using generalized point photogrammetry theory to generate monomer models of indoor scenes, overcoming confused topological relationships and inaccurate 3D model edges; finally, the data-driven and model-driven modelling results are fused. The experimental results demonstrate that CDMD3DM is feasible and automatic, and that it produces more accurate, more reliable, and semantically richer models with clearer topological relationships than current 3D modelling methods using indoor RGB-D data. These outcomes promote interdisciplinary integration between computer vision and photogrammetry.
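The abstract does not specify the paper's algorithms, but the model-fitting stage it describes (estimating geometric model parameters from a segmented point cloud) can be illustrated with a minimal, hypothetical sketch: a RANSAC plane fit over a synthetic point cloud. All function names, parameters, and data here are assumptions for illustration, not the paper's implementation.

```python
import numpy as np

def ransac_plane(points, n_iters=200, threshold=0.01, rng=None):
    """Fit a plane to a 3D point cloud with RANSAC.

    Returns ((normal, d), inlier_mask) for the plane n.x + d = 0.
    A generic stand-in for fitting a geometric primitive to a
    segmented point cloud; NOT the paper's optimization method.
    """
    rng = np.random.default_rng(rng)
    best_inliers = np.zeros(len(points), dtype=bool)
    best_model = None
    for _ in range(n_iters):
        sample = points[rng.choice(len(points), 3, replace=False)]
        # Plane normal from two edge vectors of the sampled triangle.
        normal = np.cross(sample[1] - sample[0], sample[2] - sample[0])
        norm = np.linalg.norm(normal)
        if norm < 1e-12:  # degenerate (collinear) sample, skip
            continue
        normal /= norm
        d = -normal @ sample[0]
        dist = np.abs(points @ normal + d)  # point-to-plane distances
        inliers = dist < threshold
        if inliers.sum() > best_inliers.sum():
            best_inliers, best_model = inliers, (normal, d)
    return best_model, best_inliers

# Synthetic cloud: a noisy z = 0 plane plus uniform outliers.
data_rng = np.random.default_rng(0)
plane_pts = np.column_stack([data_rng.uniform(-1, 1, (500, 2)),
                             data_rng.normal(0, 0.002, 500)])
outliers = data_rng.uniform(-1, 1, (100, 3))
cloud = np.vstack([plane_pts, outliers])
model, inliers = ransac_plane(cloud, rng=1)
```

On this synthetic input the fitted normal is close to the z axis and almost all of the 500 planar points are recovered as inliers, which is the behaviour such a primitive-fitting step relies on before any finer parameter optimization.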
