Abstract

The robustness of air-ground multi-source image matching is closely related to the quality of the ground reference image. To explore the influence of reference images on matching performance, we used the Structure from Motion (SfM) algorithm and Monte Carlo analysis to examine the impact of control-point projection accuracy and tie-point accuracy on bundle adjustment results when generating digital orthophoto images. Additionally, we developed a method for learning local deep features in natural environments by fine-tuning a pre-trained ResNet50 model, and used it to match multi-scale, multi-seasonal, and multi-viewpoint air-ground multi-source images. The results show that the proposed method yields a relatively even distribution of corresponding feature points across different seasons, viewpoints, and illumination conditions. Compared with state-of-the-art hand-crafted computer vision and deep learning matching methods, the proposed method demonstrated more efficient and robust matching performance and could be applied to a variety of unmanned aerial vehicle (UAV) self- and target-positioning applications in GPS-denied areas.
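The Monte Carlo analysis of control-point projection accuracy described above can be sketched as follows. This is a minimal illustration, not the paper's actual pipeline: a simple 2-D affine model stands in for full SfM bundle adjustment, and the point counts and noise level (`sigma`) are assumed values chosen for demonstration. Each trial perturbs the control-point projections with Gaussian noise, refits the transform, and records the resulting error against the noise-free truth.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical ground-truth control points (ground coordinates, metres).
ground = rng.uniform(0, 500, size=(8, 2))

# An assumed "true" affine transform mapping ground to image coordinates.
A_true = np.array([[1.02, 0.05], [-0.04, 0.98]])
t_true = np.array([12.0, -7.0])
image = ground @ A_true.T + t_true  # noise-free projections

def fit_affine(src, dst):
    """Least-squares affine fit: dst ~ src @ A.T + t."""
    X = np.hstack([src, np.ones((len(src), 1))])
    sol, *_ = np.linalg.lstsq(X, dst, rcond=None)
    return sol[:2].T, sol[2]

sigma = 0.5  # assumed control-point projection noise (pixels)
rmses = []
for _ in range(1000):
    noisy = image + rng.normal(0, sigma, size=image.shape)
    A, t = fit_affine(ground, noisy)
    resid = ground @ A.T + t - image  # error vs. noise-free projections
    rmses.append(np.sqrt((resid ** 2).mean()))

print(f"mean RMSE: {np.mean(rmses):.3f}, std: {np.std(rmses):.3f}")
```

Sweeping `sigma` over a range of values and plotting the RMSE distribution is the usual way such a sensitivity analysis is reported; the same loop structure applies when the affine fit is replaced by a full bundle adjustment.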

Highlights

  • Air-ground multi-source image matching is the process of finding corresponding points between two images taken of the same scene under different sensors, viewpoints, times, and weather conditions [1]

  • Matching results based on an unmanned aerial vehicle (UAV) reference image

  • A digital orthophoto map (DOM) of the experimental area, made from UAV images taken in winter, was used as the reference image, and its deep features were extracted

Introduction

Air-ground multi-source image matching is the process of finding corresponding points between two images taken of the same scene under different sensors, viewpoints, times, and weather conditions [1]. Air-ground image matching aims to find robust features in images acquired by UAVs that are consistent with a previous reference image. The key to successful matching is an appropriate matching strategy that makes use of all available and explicit knowledge concerning the sensor model, network structure, and image content. In the multi-source UAV image acquisition phase, differences in resolution, viewpoint, scale, sensor model, and illumination conditions lead to feature confusion and object occlusion in the images. Images collected at different times may also show changes in the number or presence of objects.
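Finding corresponding points between two descriptor sets is commonly done with mutual nearest-neighbour matching plus Lowe's ratio test, a standard strategy rather than the paper's specific method. The sketch below assumes L2-normalised descriptors (such as those a fine-tuned CNN might produce); the toy data, function name, and ratio threshold are illustrative.

```python
import numpy as np

def mutual_nn_match(desc_a, desc_b, ratio=0.8):
    """Mutual nearest-neighbour matching with Lowe's ratio test.

    desc_a, desc_b: (N, D) arrays of L2-normalised descriptors.
    Returns a list of (index_in_a, index_in_b) pairs.
    """
    # Pairwise squared Euclidean distances between all descriptors.
    d = ((desc_a[:, None, :] - desc_b[None, :, :]) ** 2).sum(-1)
    nn_ab = d.argmin(axis=1)  # best match in B for each descriptor in A
    nn_ba = d.argmin(axis=0)  # best match in A for each descriptor in B
    matches = []
    for i, j in enumerate(nn_ab):
        if nn_ba[j] != i:
            continue  # not a mutual nearest neighbour
        row = np.sort(d[i])
        if len(row) > 1 and row[0] > (ratio ** 2) * row[1]:
            continue  # fails the ratio test (ambiguous match)
        matches.append((i, int(j)))
    return matches

# Toy example: descriptors in B are noisy, shuffled copies of those in A.
rng = np.random.default_rng(1)
a = rng.normal(size=(10, 16))
a /= np.linalg.norm(a, axis=1, keepdims=True)
perm = rng.permutation(10)
b = a[perm] + rng.normal(0, 0.01, size=a.shape)
b /= np.linalg.norm(b, axis=1, keepdims=True)

matches = mutual_nn_match(a, b)
print(len(matches), "matches")
```

In practice the resulting correspondences would still be filtered geometrically (e.g. with RANSAC on a homography or fundamental matrix) to remove outliers caused by the viewpoint, season, and illumination differences described above.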
