Abstract

Mesh models generated by multi-view stereo (MVS) algorithms often fail to adequately represent the sharp, natural edge details of a scene. The harsh depth discontinuities of edge regions pose a challenging task for dense reconstruction, while vertex displacement during mesh refinement frequently leads to smoothed edges that do not coincide with the fine details of the scene. Meanwhile, 3D edges have been used for scene representation, particularly of man-made built environments, which are dominated by regular planar and linear structures. Indeed, 3D edge detection and matching are commonly exploited to constrain camera pose estimation, to generate an abstract representation of the most salient parts of the scene, or to support mesh reconstruction. In this work, we jointly use 3D edge extraction and MVS mesh generation to promote edge-detail preservation in the final result. Salient 3D edges of the scene are reconstructed with state-of-the-art algorithms and integrated into the dense point cloud to support the subsequent mesh triangulation step. Experiments on benchmark dataset sequences, evaluated with metric and appearance-based measures, are performed to test our hypothesis.

Highlights

  • For a given set of images with known orientation parameters, typically the output of structure from motion (SfM), multi-view stereo (MVS) methods generate a 3D dense point cloud, a triangulated mesh or a volume

  • More evident details are observed in the qualitative comparison of the mesh models, especially where accurate 3D edges were reconstructed (Figure 5)

  • In this paper we presented an approach to leverage edge information in the standard multi-view stereo (MVS) mesh reconstruction pipeline

Introduction

For a given set of images with known orientation parameters (poses), typically the output of structure from motion (SfM), multi-view stereo (MVS) methods generate a 3D dense point cloud, a triangulated mesh or a volume. Some methods keep the photometric consistency criterion active while wrapping the surface, yielding a more refined mesh, while others generate the optimal surface mesh from a (typically dense) point cloud using, e.g., Delaunay triangulation or Poisson surface reconstruction (Kazhdan et al., 2006). Although such algorithms are mature and produce impressive results, several challenges still remain towards the complete, accurate and detail-preserving 3D reconstruction of scenes. Edges have been used to support various photogrammetric and computer vision tasks such as image matching (Wang et al., 2009; Wang et al., 2021), camera localization (Hirose and Saito, 2012; Salaün et al., 2017; Miraldo et al., 2018), abstract 3D scene representation (Hofer et al., 2015; 2017), meshing sparse clouds (Bódis-Szomorú et al., 2015; Sugiura et al., 2015), as well as modelling and simplifying the scene (Langlois et al., 2019; Chen et al., 2020; Li and Nan, 2021).
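To make the point-cloud-to-mesh step above concrete, the following is a minimal sketch (not the paper's implementation) of Delaunay-based triangulation over a synthetic 2.5D point cloud, using SciPy. The height-field data and all parameter values are illustrative assumptions; real MVS clouds are fully 3D and would use a 3D Delaunay tetrahedralization with surface extraction, or Poisson reconstruction, instead.

```python
import numpy as np
from scipy.spatial import Delaunay

# Toy "dense point cloud": a 2.5D height field (x, y, z) on random samples.
# In a real pipeline these points would come from an MVS densification step.
rng = np.random.default_rng(0)
xy = rng.uniform(0.0, 1.0, size=(200, 2))
z = np.sin(4.0 * xy[:, 0]) * np.cos(4.0 * xy[:, 1])
points = np.column_stack([xy, z])

# 2.5D meshing: triangulate the (x, y) projection; each Delaunay simplex
# indexes three cloud points and becomes one triangular mesh face.
tri = Delaunay(points[:, :2])
faces = tri.simplices  # integer array of shape (n_faces, 3)

print("vertices:", points.shape[0], "faces:", faces.shape[0])
```

Because the triangulation only re-indexes existing points, any 3D edge points injected into the cloud (as proposed in this work) become mesh vertices directly, which is what allows the triangulation to align faces with salient scene edges.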
