Abstract

3D recovery from multi-view stereo and stereo images, as an important application of image-based perspective geometry, serves many applications in computer vision, remote sensing, and geomatics. In this chapter, the authors utilize the imaging geometry and present approaches that perform 3D reconstruction from cross-view images that are drastically different in their viewpoints. We introduce our framework, which takes ground-view images and satellite images for full 3D recovery and includes the necessary methods for satellite and ground-based point cloud generation from images, 3D data co-registration, fusion, and mesh generation. We demonstrate the proposed framework on a dataset consisting of twelve satellite images and 150k video frames acquired with a vehicle-mounted GoPro camera, and present the reconstruction results. We also compare our results with those generated by an intuitive processing pipeline that uses typical geo-registration and meshing methods.

Highlights

  • The available commercial satellite images often have a 0.3–0.5 m GSD, while ground-view images can reach a GSD of a few millimeters (see the short calculation after this list)

  • The resulting 3D geometry may be associated with different uncertainties, which adds challenges to the task of fusing these two types of data; among them, the quality of the 3D output separately generated from satellite images and from ground-view images is scene-specific and may differ in terms of completeness and accuracy

  • We introduce in our proposed method major contributions that address the abovementioned challenges and form a complete fusion pipeline: (1) we introduce a monocular video-frame-based 3D reconstruction pipeline that achieves minimal geometric distortion by leveraging the speed and accuracy of a photogrammetric reconstruction pipeline called MetricSFM; (2) we introduce a cross-view geo-registration and fusion algorithm that takes point clouds generated from satellite multi-view stereo (MVS) images and from ground-view videos and co-registers the ground-view point clouds to the overview point clouds; and (3) we extend a view-based meshing approach to accommodate point clouds with images coming from different cameras
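
To make the resolution gap in the first highlight concrete, here is a rough back-of-the-envelope comparison (ours, not the chapter's; the 5 mm figure is only an assumed representative ground-view GSD), written as a short Python snippet:

    # Illustrative only: pixel-footprint comparison between a 0.5 m GSD
    # satellite image and an assumed 5 mm GSD ground-view image.
    satellite_gsd_m = 0.5    # satellite ground sampling distance (m/pixel)
    ground_gsd_m = 0.005     # assumed ground-view GSD (m/pixel), i.e. 5 mm

    # Ground area covered by a single pixel of each sensor (m^2).
    satellite_pixel_area = satellite_gsd_m ** 2
    ground_pixel_area = ground_gsd_m ** 2

    # One satellite pixel spans roughly this many ground-view pixels.
    ratio = satellite_pixel_area / ground_pixel_area
    print(f"1 satellite pixel covers about {ratio:,.0f} ground-view pixels")  # ~10,000

This gap of roughly two orders of magnitude per axis is one reason the two point clouds differ so much in density and level of detail, and why a dedicated fusion step is needed.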

Introduction

The available commercial satellite images often have a 0.3–0.5 m GSD (ground sampling distance), while ground-view images can reach a GSD of a few millimeters. Although the algorithms and basic principles for image-based 3D modeling are relatively standard, the image quality and the images' respective characteristics play a major role in the reconstruction results, such as the photo-consistency, temporal differences, and illumination among images, their geometric setup, completeness in terms of coverage, intersection angles, etc. We introduce in our proposed method major contributions that address the abovementioned challenges and form a complete fusion pipeline. These contributions are: (1) we introduce a monocular video-frame-based 3D reconstruction pipeline that achieves minimal geometric distortion by leveraging the speed and accuracy of a photogrammetric reconstruction pipeline called MetricSFM; (2) we introduce a cross-view geo-registration and fusion algorithm that takes point clouds generated from satellite multi-view stereo (MVS) images and from ground-view videos and co-registers the ground-view point clouds to the overview point clouds; and (3) we extend a view-based meshing approach to accommodate point clouds with images coming from different cameras. The rest of this chapter is organized as follows: Section 2 introduces related works and an overview of the proposed pipeline; Section 3 describes the methodologies of the pipeline components in detail; Section 4 describes the experimental dataset and the 3D reconstruction results; and Section 5 concludes this chapter by discussing potential future work.
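
The chapter's geo-registration itself builds on building-boundary extraction, segment matching with a graph-cut, and bundle adjustment (see the section list below). As a minimal, generic sketch of only the transform-estimation step that such a matching enables, the snippet below computes a least-squares rigid (optionally similarity) transform from already-matched 3D correspondences using the Umeyama/Procrustes method; the function name and the use of matched building-segment centroids as input are our illustrative assumptions, not the authors' implementation.

    import numpy as np

    def estimate_rigid_transform(ground_pts, satellite_pts, with_scale=False):
        """Least-squares rigid (optionally similarity) transform mapping
        ground_pts onto satellite_pts (Umeyama/Procrustes). Both arrays are
        (N, 3); row i of each is assumed to be a matched pair, e.g. centroids
        of building segments matched across the two point clouds (a
        hypothetical input for illustration)."""
        src = np.asarray(ground_pts, dtype=float)
        dst = np.asarray(satellite_pts, dtype=float)

        mu_src, mu_dst = src.mean(axis=0), dst.mean(axis=0)
        src_c, dst_c = src - mu_src, dst - mu_dst

        # Cross-covariance; its SVD yields the optimal rotation.
        H = src_c.T @ dst_c / len(src)
        U, S, Vt = np.linalg.svd(H)
        D = np.eye(3)
        if np.linalg.det(U) * np.linalg.det(Vt) < 0:  # guard against reflections
            D[2, 2] = -1.0
        R = Vt.T @ D @ U.T

        s = 1.0
        if with_scale:
            # Optimal isotropic scale (Umeyama, 1991).
            s = np.trace(np.diag(S) @ D) / src_c.var(axis=0).sum()

        t = mu_dst - s * R @ mu_src

        T = np.eye(4)  # homogeneous 4x4 transform: ground -> satellite frame
        T[:3, :3] = s * R
        T[:3, 3] = t
        return T

In a pipeline like the one outlined below, the returned 4x4 transform would be applied to the ground-view point cloud to bring it into the satellite-derived frame before any further pose refinement, fusion, and meshing.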

Related works and an overview of the proposed pipeline
Multi-view stereo (MVS) satellite image processing
A 3D reconstruction pipeline
Cross-view 3D point co-registration and fusion
Building boundary extraction from ground-view and over-view point clouds
Individual building segment matching
Global optimization for consistent building segment matching using graph-cut
Data term
Smooth term
Bundle adjustment for pose refinement
Mesh reconstruction of cross-view fused point cloud
Texture mapping of cross-view fused point cloud
Data description
Experiment results
Accuracy evaluation
Conclusion
