Abstract

Integrated reconstruction is crucial for 3D modeling of urban scenes from multi-source images. However, large viewpoint and illumination variations pose challenges to existing solutions. A novel approach for accurate 3D reconstruction of complex urban scenes based on robust fusion of multi-source images is proposed. First, georeferenced sparse models are reconstructed separately from the terrestrial and the aerial images using GNSS-aided incremental structure from motion (SfM). Second, cross-platform match pairs are selected based on point-on-image observability, and the terrestrial and aerial images are robustly matched over the selected pairs to generate cross-platform tie points. Third, the tie points are triangulated to derive cross-platform 3D correspondences, which are refined with a novel outlier detection method. Finally, the terrestrial and aerial sparse models are merged based on the refined correspondences, and the integrated model is globally optimized to obtain an accurate reconstruction of the scene. The proposed methodology is evaluated on five benchmark datasets through extensive experiments and compared with a state-of-the-art methodology and three widely used software packages. The results demonstrate that the proposed pipeline outperforms the others in terms of robustness and accuracy.
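
The merging step summarized above hinges on estimating a 3D similarity transform between the terrestrial and aerial sparse models from the refined cross-platform correspondences. As an illustration only, the Python sketch below shows one standard way to do this, a closed-form Umeyama alignment applied to already outlier-free correspondences; the function name, the toy data, and the alignment direction are assumptions for demonstration and do not reproduce the paper's actual merging or global optimization.

import numpy as np

def umeyama_similarity(src, dst):
    """Least-squares similarity transform (scale s, rotation R, translation t)
    mapping src onto dst (Umeyama, 1991). src, dst: (N, 3) corresponding 3D points."""
    mu_src, mu_dst = src.mean(axis=0), dst.mean(axis=0)
    src_c, dst_c = src - mu_src, dst - mu_dst
    cov = dst_c.T @ src_c / src.shape[0]          # cross-covariance of the two point sets
    U, D, Vt = np.linalg.svd(cov)
    S = np.eye(3)
    if np.linalg.det(U) * np.linalg.det(Vt) < 0:  # guard against a reflection solution
        S[2, 2] = -1.0
    R = U @ S @ Vt
    var_src = (src_c ** 2).sum() / src.shape[0]
    s = np.trace(np.diag(D) @ S) / var_src
    t = mu_dst - s * R @ mu_src
    return s, R, t

# Toy usage: align a synthetic "terrestrial" point set to its "aerial" counterpart.
rng = np.random.default_rng(0)
aerial_pts = rng.normal(size=(100, 3))
theta = 0.3                                        # known rotation about the z-axis
true_R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0,            0.0,           1.0]])
terrestrial_pts = 0.5 * aerial_pts @ true_R.T + np.array([10.0, -3.0, 2.0])
s, R, t = umeyama_similarity(terrestrial_pts, aerial_pts)
aligned = s * terrestrial_pts @ R.T + t
print(np.abs(aligned - aerial_pts).max())          # ~0 for noise-free correspondences

In practice such an estimate would be wrapped in a robust scheme (e.g., RANSAC over the correspondences) before any global refinement, which is consistent with the outlier handling the abstract describes but is not shown here.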
