Abstract

Reconstructing 3D point cloud models from image sequences tends to be impacted by illumination variations and textureless regions in images, resulting in missing parts or an uneven distribution of the retrieved points. To improve reconstruction completeness, this work proposes an enhanced similarity metric that is robust to illumination variations among images during dense diffusion, pushing the seed-and-expand reconstruction scheme further. The metric integrates the zero-mean normalized cross-correlation coefficient of illumination with that of texture information, which respectively weakens the influence of illumination variations and textureless regions. Incorporated with disparity gradient and confidence constraints, candidate image features are diffused to their neighborhoods for dense 3D point recovery. We illustrate the two-phase results on multiple datasets and evaluate the robustness of the proposed algorithm to illumination variations. Experiments show that our method recovers 10.0% more points, on average, than competing methods in illumination-varying scenarios and achieves better completeness with comparable accuracy.
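As a rough illustration of the kind of metric the abstract describes, the sketch below combines the zero-mean normalized cross-correlation (ZNCC) of raw intensities (the illumination term) with the ZNCC of gradient magnitudes (a texture term). The gradient-based texture channel and the blending weight `w` are illustrative assumptions, not the paper's exact formulation:

```python
import numpy as np

def zncc(a, b):
    """Zero-mean normalized cross-correlation between two equal-size patches."""
    a = a.astype(np.float64).ravel()
    b = b.astype(np.float64).ravel()
    a -= a.mean()
    b -= b.mean()
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b / denom) if denom > 0 else 0.0

def enhanced_similarity(patch_l, patch_r, w=0.5):
    """Blend illumination and texture ZNCC scores (illustrative sketch).

    Zero-meaning and normalization make the illumination term invariant to
    affine intensity changes; the gradient-magnitude texture term further
    discounts smooth lighting variation. The weight ``w`` is an assumption.
    """
    def grad_mag(p):
        gy, gx = np.gradient(p.astype(np.float64))
        return np.hypot(gx, gy)

    s_illum = zncc(patch_l, patch_r)
    s_tex = zncc(grad_mag(patch_l), grad_mag(patch_r))
    return w * s_illum + (1.0 - w) * s_tex
```

Because both channels are zero-meaned and normalized, a patch and an affinely brightened copy of it (e.g., `1.5 * p + 0.2`) score near 1.0, which is the robustness to illumination change the abstract claims.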

Highlights

  • Incorporated with disparity gradient and confidence constraints, candidate image features are diffused to their neighborhoods for dense 3D point recovery

  • The work has the following merits: (1) We propose an enhanced similarity metric for image-based 3D reconstruction to improve the quality of the retrieved point clouds

  • It is observed that the feature diffusion phase constructs the basic structures of the photographed models; however, the points are not dense and parts are missing in the recovered point cloud models, such as the back and feet of “Dino16”, the stairs and pillars of “Temple16”, and the plain walls of “Fountain25”


Summary

Introduction

To obtain a dense point cloud model with richer details, a multi-view stereo scheme known as seed-and-expand is widely employed: it takes sparse seed feature matches as input, propagates them to their pixel neighborhoods, and restores 3D points by stereo mapping with estimated camera parameters [6]. This scheme uses sparse features or points as seeds to build a dense point cloud recursively and adaptively, as in PMVS [7] and VisualSfM [8].
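The seed-and-expand idea can be sketched as a greedy best-first diffusion: sparse seed matches are expanded to their pixel neighbors, and a neighbor is accepted only if its similarity passes a confidence threshold and its disparity stays within a gradient limit of its parent. The thresholds `tau` and `max_dgrad`, the 5×5 window, and the 4-neighborhood below are illustrative assumptions, not the exact settings of PMVS or this paper:

```python
import heapq
import numpy as np

def seed_and_expand(left, right, seeds, sim_fn, tau=0.6, max_dgrad=1):
    """Best-first match diffusion (a sketch of the seed-and-expand scheme).

    seeds: list of (confidence, x, y, disparity) sparse matches.
    sim_fn(left_patch, right_patch) -> similarity score.
    tau (confidence constraint) and max_dgrad (disparity-gradient
    constraint) are illustrative thresholds.
    """
    h, w = left.shape
    disp = np.full((h, w), np.nan)          # recovered disparity map
    half = 2                                # half-window for 5x5 patches

    def patch(img, x, y):
        return img[y - half:y + half + 1, x - half:x + half + 1]

    heap = [(-c, x, y, d) for c, x, y, d in seeds]  # max-heap by confidence
    heapq.heapify(heap)
    while heap:
        _, x, y, d = heapq.heappop(heap)
        if not np.isnan(disp[y, x]):
            continue                        # already recovered
        disp[y, x] = d
        # Diffuse to 4-neighbors, trying disparities near the parent's.
        for nx, ny in ((x - 1, y), (x + 1, y), (x, y - 1), (x, y + 1)):
            if not (half <= ny < h - half and half <= nx < w - half):
                continue
            if not np.isnan(disp[ny, nx]):
                continue
            for nd in range(d - max_dgrad, d + max_dgrad + 1):
                if half <= nx - nd < w - half:
                    s = sim_fn(patch(left, nx, ny), patch(right, nx - nd, ny))
                    if s >= tau:            # confidence constraint
                        heapq.heappush(heap, (-s, nx, ny, nd))
    return disp
```

Processing matches in descending confidence order lets reliable regions flood first, so ambiguous pixels inherit disparities from well-supported neighbors rather than from weak seeds.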

