Abstract

Real-time 3D reconstruction of static scenes can be achieved by fusing RGB-D image sequences. A popular practice is to divide space into uniform voxels and represent surface information with a truncated signed distance function (TSDF). To represent large-scale scenes, voxel hashing, which stores voxels compactly, can be used; however, most conventional methods do not consider the complexity and roughness of the object surfaces in the scene and therefore represent the scene at a uniform resolution, which limits both the range of scenes that can be represented and the speed of real-time reconstruction. In this paper, a large-scale scene reconstruction algorithm based on voxel-hashing storage with a level-of-detail (LOD) representation is proposed. The main contributions are twofold: (1) The depth image is preprocessed with a smoothing filter, which preserves data accuracy while effectively reducing the distortion caused by the sensor itself and by rapid motion, and thus provides better support for the voxel-hashing, model-rendering, and frame-to-model camera pose tracking stages. (2) A 3D reconstruction with LOD representation is realized: view distance and the roughness of the model surface serve as the criteria that control the adaptive subdivision and representation of spatial voxel blocks. Finally, we carried out qualitative and quantitative evaluations of the algorithm and confirmed that it achieves real-time reconstruction at different levels of detail on commodity graphics hardware and produces good fusion results in large-scale scenes.
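As a concrete illustration of the data structure the abstract describes, the following is a minimal C++ sketch of a TSDF voxel store indexed by a spatial hash over voxel blocks, with a per-block LOD level chosen from view distance and surface roughness. All names (Voxel, VoxelBlock, chooseLod) and the specific thresholds are illustrative assumptions rather than the paper's actual implementation; the hash constants are the prime-multiply scheme commonly used in voxel-hashing systems.

```cpp
// Minimal sketch of a voxel-hashing TSDF store with per-block LOD.
// Assumptions: an 8^3 base block, three LOD levels, and a toy LOD policy
// based on the abstract's criteria (view distance and surface roughness).
#include <algorithm>
#include <cstdint>
#include <unordered_map>
#include <vector>

struct Voxel {
    float tsdf   = 1.0f;  // truncated signed distance, normalized to [-1, 1]
    float weight = 0.0f;  // integration weight for running-average fusion
};

struct BlockCoord {
    int x, y, z;
    bool operator==(const BlockCoord& o) const {
        return x == o.x && y == o.y && z == o.z;
    }
};

struct BlockHash {
    // Prime-multiply spatial hash commonly used in voxel-hashing systems.
    std::size_t operator()(const BlockCoord& c) const {
        return (static_cast<std::size_t>(c.x) * 73856093u) ^
               (static_cast<std::size_t>(c.y) * 19349669u) ^
               (static_cast<std::size_t>(c.z) * 83492791u);
    }
};

struct VoxelBlock {
    int lod = 0;                // 0 = finest; higher = coarser resolution
    std::vector<Voxel> voxels;  // (8 >> lod)^3 voxels, laid out linearly
    explicit VoxelBlock(int level) : lod(level) {
        int side = 8 >> lod;
        voxels.resize(static_cast<std::size_t>(side) * side * side);
    }
};

// Hypothetical LOD policy: coarser blocks far from the camera or on smooth
// (low-roughness) surfaces; the thresholds here are placeholders.
int chooseLod(float viewDistance, float roughness) {
    int byDistance  = static_cast<int>(viewDistance / 2.0f);  // 1 level per 2 m
    int byRoughness = roughness < 0.1f ? 1 : 0;               // smooth => coarser
    return std::min(byDistance + byRoughness, 2);             // clamp to 3 levels
}

int main() {
    std::unordered_map<BlockCoord, VoxelBlock, BlockHash> hashTable;
    BlockCoord c{4, -1, 7};  // block index in world space
    hashTable.emplace(c, VoxelBlock(chooseLod(3.5f, 0.05f)));
    return 0;
}
```

Keying the hash table on block coordinates rather than individual voxels is what makes the storage compact: only blocks near observed surfaces are ever allocated, and the per-block LOD field lets distant or smooth regions spend fewer voxels per block.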
