Abstract

To produce highly detailed 3D models of architectural scenes, both aerial and terrestrial images are usually captured. However, because the two image sets are taken from very different viewpoints, visual entities in cross-view images change dramatically in appearance, and the resulting perspective distortion makes it difficult to obtain correspondences between aerial–terrestrial image pairs. To solve this problem, a tie-point matching method based on variational patch refinement is proposed. First, aerotriangulation is performed on the aerial and terrestrial images separately, and patches are created from the resulting sparse point clouds. Second, the patches are optimized by variational patch refinement so that they lie close to the object surface; projecting the terrestrial and aerial images onto these patches reduces both perspective distortion and scale differences. Finally, tie points between aerial and terrestrial images are obtained through patch-based matching. Experimental evaluations on four datasets from the ISPRS benchmark and from Shandong University of Science and Technology show that the proposed method performs well in terrestrial–aerial image matching. However, matching time increases because point clouds must be generated, and occlusion in an image, such as that caused by trees, can degrade point-cloud generation. Future research directions therefore include optimizing time complexity and handling occluded images.
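The geometric idea behind projecting images onto refined patches can be illustrated with a small experiment: a locally planar patch relates its two image projections by a plane-induced homography, so re-projecting one view through the patch plane removes most of the perspective distortion before correlation-based matching. The following NumPy sketch demonstrates this on synthetic data; the homography `H`, the random texture, and the normalized cross-correlation (NCC) measure are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def warp(img, H, shape):
    """Inverse-warp img by homography H (maps output pixels to source coords),
    sampling with bilinear interpolation and edge replication."""
    h, w = shape
    ys, xs = np.mgrid[0:h, 0:w].astype(float)
    denom = H[2, 0] * xs + H[2, 1] * ys + H[2, 2]
    sx = (H[0, 0] * xs + H[0, 1] * ys + H[0, 2]) / denom
    sy = (H[1, 0] * xs + H[1, 1] * ys + H[1, 2]) / denom
    x0 = np.clip(np.floor(sx).astype(int), 0, img.shape[1] - 2)
    y0 = np.clip(np.floor(sy).astype(int), 0, img.shape[0] - 2)
    fx = np.clip(sx - x0, 0.0, 1.0)
    fy = np.clip(sy - y0, 0.0, 1.0)
    return ((1 - fx) * (1 - fy) * img[y0, x0] + fx * (1 - fy) * img[y0, x0 + 1]
            + (1 - fx) * fy * img[y0 + 1, x0] + fx * fy * img[y0 + 1, x0 + 1])

def ncc(a, b):
    """Normalized cross-correlation between two equal-size patches."""
    a = a - a.mean()
    b = b - b.mean()
    return float((a * b).sum() / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

rng = np.random.default_rng(0)
patch_a = rng.random((40, 40))          # "aerial" view of a planar patch (synthetic texture)

# Hypothetical plane-induced homography simulating the terrestrial viewpoint.
H = np.array([[0.90, 0.15,  2.0],
              [0.05, 1.10, -1.0],
              [1e-3, 5e-4,  1.0]])
patch_b = warp(patch_a, H, (40, 40))    # "terrestrial" view: perspective-distorted

# Matching the raw cross-view patches: distortion suppresses the correlation.
score_raw = ncc(patch_a, patch_b)
# Re-projecting the terrestrial view through the patch plane (inverse homography)
# undoes the distortion, so correlation recovers.
score_rect = ncc(patch_a, warp(patch_b, np.linalg.inv(H), (40, 40)))
print(f"raw NCC = {score_raw:.2f}, rectified NCC = {score_rect:.2f}")
```

In the full method the homography is not known in advance; it is induced by the refined patch geometry and the camera poses from aerotriangulation, which is why the variational refinement step (moving patches onto the object surface) directly improves matchability.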
