Abstract

Image-based 3D modelling is rather mature nowadays for well-acquired images processed through a standard photogrammetric pipeline, yet fusing 3D datasets generated from images with different views for surface reconstruction remains a challenge. Meshing algorithms for image-based 3D datasets require visibility information for surfaces, and such information can be difficult to obtain for 3D point clouds generated from images with different views, sources, resolutions and uncertainties. In this paper, we propose a novel multi-source mesh reconstruction and texture mapping pipeline optimized to address this challenge. Our key contributions are: 1) we extend a state-of-the-art image-based surface reconstruction method by incorporating geometric information produced from satellite images to create wide-area surface models; 2) we extend a texture mapping method to accommodate images acquired from different sensors, i.e., side-view perspective images and satellite images. Experiments show that our method creates a conforming surface model from these two sources, as well as consistent and well-balanced textures from images with drastically different radiometry (satellite images vs. street-view-level images). We compare our proposed pipeline with a typical fusion pipeline, Poisson reconstruction, and the results show that our pipeline has distinctive advantages.
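
For intuition, the per-point visibility mentioned above can be thought of as a record attaching to each 3D point the set of images that observe it. The Python sketch below is purely illustrative and not taken from the paper; the class name, coordinates and image indices are invented placeholders.

```python
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class VisiblePoint:
    """A 3D point together with the indices of the images that observe it."""
    xyz: Tuple[float, float, float]                      # position in a common georeferenced frame
    image_ids: List[int] = field(default_factory=list)   # observing images (empty if unknown)

# A point from a perspective-image SfM/MVS pipeline usually carries its observing views...
sfm_point = VisiblePoint(xyz=(421.3, 118.7, 52.1), image_ids=[3, 17, 42])

# ...whereas a point triangulated from satellite imagery, or imported from another
# source, may arrive with no usable visibility information at all.
satellite_point = VisiblePoint(xyz=(419.8, 120.2, 54.0))
```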

Highlights

  • Surface models obtained by meshing point clouds are important representations in the Geomatics and Computer Graphics communities

  • In this paper, taking 3D point clouds generated from satellite images and 3D point clouds produced by an SfM pipeline, we propose a method to reconstruct textured meshes from such data

  • We extend a texture mapping method to accommodate images acquired from different sensors, i.e., side-view perspective images and satellite images


Summary

Introduction

Surface models obtained by meshing point clouds are important representations in the Geomatics and Computer Graphics communities. Data from different sources are often processed separately with well-developed approaches to generate point clouds, to which a point-cloud-based meshing algorithm (Kazhdan et al., 2006) is applied, followed by texture mapping using the available oriented images. Meshing from image-based point clouds usually exploits visibility information that encodes, for each point, which image (if available) observes it, since this information implicitly constrains the surface with respect to the camera positions. Meshing combined point clouds (from different sources) is likely to lack this visibility information, because, on one hand, most image-based 3D modelling software packages do not output it, and on the other hand, when multiple […]
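
As a point of reference, the point-cloud-based meshing baseline cited above (Kazhdan et al., 2006), i.e. Poisson surface reconstruction, can be approximated with off-the-shelf tools. The sketch below is not the pipeline proposed in the paper; it assumes Open3D is available, and the input file names and parameter values are placeholders.

```python
import numpy as np
import open3d as o3d

# Load the two point clouds (placeholder file names).
pcd_satellite = o3d.io.read_point_cloud("satellite_points.ply")
pcd_sfm = o3d.io.read_point_cloud("sfm_points.ply")

# Naive fusion: simple concatenation. Any per-point visibility attached to the
# SfM cloud is discarded at this stage, which is the limitation discussed above.
merged = pcd_satellite + pcd_sfm

# Poisson reconstruction needs oriented normals; without camera positions they
# have to be re-estimated and oriented from the geometry alone.
merged.estimate_normals(
    search_param=o3d.geometry.KDTreeSearchParamHybrid(radius=2.0, max_nn=30))
merged.orient_normals_consistent_tangent_plane(30)

# Screened Poisson surface reconstruction (Kazhdan et al.).
mesh, densities = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(
    merged, depth=10)

# Trim low-density vertices, a common way to cut back hallucinated surface.
dens = np.asarray(densities)
mesh.remove_vertices_by_mask(dens < np.quantile(dens, 0.05))

o3d.io.write_triangle_mesh("poisson_baseline.ply", mesh)
```

Because the concatenated cloud no longer records which image saw which point, the reconstruction cannot exploit the relationship between points and camera positions, which is the gap the proposed multi-source pipeline targets.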

Related Works
Data Description
Visibility
Assigning Weights for the Graph
The Proposed Multi-source Texture Mapping Pipeline
Best-View selection
EVALUATION
Seamless Texture Fusion
CONCLUSION