Abstract

A dense point cloud with rich and realistic texture can be generated from multiview images using dense reconstruction algorithms such as Multi-View Stereo (MVS). However, its spatial precision depends on the performance of the matching and dense reconstruction algorithms used. Moreover, outliers are usually unavoidable owing to mismatched image features. The lidar point cloud lacks texture but offers higher spatial precision because it does not suffer from such computational errors. This paper proposes a multiresolution patch-based 3D dense reconstruction method that integrates multiview images with the laser point cloud. A sparse point cloud is first generated from the multiview images by Structure from Motion (SfM) and then registered with the laser point cloud to establish the mapping between the laser point cloud and the multiview images. The laser point cloud is reprojected onto the multiview images. The optimal level of the image pyramid is predicted from the distance distribution of the projected pixels and used as the starting level for patch optimization during dense reconstruction. The laser points serve as stable seed points for patch growth and expansion and are stored in a dynamic octree structure. Subsequently, the corresponding patches are optimized and expanded across the image pyramid to achieve multiscale, multiresolution dense reconstruction. In addition, the octree's spatial index structure enables highly efficient parallel computing. Experimental results show that the proposed method outperforms traditional MVS methods in terms of model accuracy and completeness, and it has broad application prospects for high-precision 3D modeling of large scenes.
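To illustrate the reprojection-and-level-selection step described above, the following is a minimal sketch, not the authors' implementation: it projects registered lidar points into one view with a pinhole camera model and picks a starting pyramid level from the spacing of the projected pixels. The function names, the nearest-neighbor spacing statistic, and the ~2-pixel target spacing are assumptions introduced for illustration only.

```python
# Hypothetical sketch of lidar reprojection and pyramid-level selection.
# Assumes the lidar cloud is already registered to the SfM frame, so the
# camera intrinsics K and pose (R, t) recovered by SfM apply to it directly.
import numpy as np
from scipy.spatial import cKDTree


def project_points(points_w, K, R, t):
    """Project Nx3 world points into pixel coordinates (pinhole model)."""
    cam = (R @ points_w.T + t.reshape(3, 1)).T      # world -> camera frame
    cam = cam[cam[:, 2] > 0]                        # keep points in front of the camera
    pix = (K @ cam.T).T
    return pix[:, :2] / pix[:, 2:3]                 # perspective division


def pick_pyramid_level(pixels, n_levels=5, target_px=2.0):
    """Choose the coarsest-to-finest level where the median spacing of the
    projected lidar pixels shrinks to roughly `target_px` pixels per seed.
    Each pyramid level halves the image resolution, so spacing halves too."""
    dists, _ = cKDTree(pixels).query(pixels, k=2)   # k=2: self + nearest neighbor
    spacing = np.median(dists[:, 1])                # median seed spacing at level 0
    for level in range(n_levels):
        if spacing / (2 ** level) <= target_px:
            return level                            # seeds dense enough at this level
    return n_levels - 1                             # fall back to the coarsest level
```

Under these assumptions, a sparsely projected lidar cloud yields a coarse starting level, and patch optimization then proceeds down the pyramid toward full resolution as the patches are expanded.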
