Abstract

Large radiometric and geometric distortion in multi-source images leads to fewer matching points with high matching blunder ratios, and global geometric relationship models between multi-sensor images are inexplicit. Thus, traditional matching blunder detection methods cannot work effectively. To address this problem, we propose two matching blunder detection methods based on graph theory. The proposed methods can build statistically significant clusters in the case of few matching points with high matching blunder ratios, and use local geometric similarity constraints to detect matching blunders when the global geometric relationship is not explicit. The first method (named the complete graph-based method) uses clusters constructed by matched triangles in complete graphs to encode the local geometric similarity of images, and it can detect matching blunders effectively without considering the global geometric relationship. The second method uses a triangular irregular network (TIN) graph to approximate the complete graph and thereby reduce the computational complexity of the first method. We name this the TIN graph-based method. Experiments show that the two graph-based methods outperform the classical random sample consensus (RANSAC)-based method in recognition rate, false rate, number of remaining matching point pairs, dispersion, and positional accuracy on simulated and real data (image pairs from Gaofen1, the near-infrared band of Gaofen1, Gaofen2, panchromatic Landsat, Ziyuan3, Jilin1 and unmanned aerial vehicle imagery). Notably, in most cases, the mean false rates of RANSAC, the complete graph-based method and the TIN graph-based method in simulated data experiments are 0.50, 0.26 and 0.14, respectively. In addition, the mean positional accuracies (RMSE, in pixels) of the three methods in real data experiments are 2.6, 1.4 and 1.5, respectively.
Furthermore, when the matching blunder ratio is no higher than 50%, the computation time of the TIN graph-based method is nearly equal to that of the RANSAC-based method, and roughly 2 to 40 times less than that of the complete graph-based method.

Highlights

  • Obtaining stable matching points between images is crucial for the accuracy and efficiency of current applications of advanced photogrammetric and computer vision methods

  • Experiments show that the two graph-based methods outperform the classical random sample consensus (RANSAC)-based method in recognition rate, false rate, number of remaining matching point pairs, dispersion, and positional accuracy on simulated and real data

  • When the matching blunder ratio is no higher than 50%, the computation time of the triangular irregular network (TIN) graph-based method is nearly equal to that of the RANSAC-based method, and roughly 2 to 40 times less than that of the complete graph-based method


Introduction

Obtaining stable matching points between images is crucial for the accuracy and efficiency of current applications of advanced photogrammetric and computer vision methods. Due to large time spans, large differences in shooting angles, and differing platforms and sensors, there is large radiometric and geometric distortion among multi-source images. The matching blunders present in multi-sensor images have two distinct features. First, there are fewer matching points, with higher matching blunder ratios; the matching blunder ratio is the ratio of the number of matching blunders to the total number of matching point pairs. Second, it is difficult to derive the transformation models between images from different sensors; that is, the global geometric relationship models needed for matching blunder detection in multi-source images cannot be obtained.
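To illustrate the local geometric similarity idea in code, the sketch below (an assumption-laden illustration, not the paper's algorithm; the angle tolerance and voting threshold are invented parameters) builds a TIN over one image's matched points with `scipy.spatial.Delaunay`, compares the interior angles of each triangle with those of its counterpart in the other image, and flags points whose incident triangles are mostly dissimilar:

```python
import numpy as np
from scipy.spatial import Delaunay

def triangle_angles(p):
    """Interior angles (radians) of a triangle given as a (3, 2) vertex array."""
    a = np.linalg.norm(p[1] - p[2])   # side opposite vertex 0
    b = np.linalg.norm(p[0] - p[2])   # side opposite vertex 1
    c = np.linalg.norm(p[0] - p[1])   # side opposite vertex 2
    A = np.arccos(np.clip((b*b + c*c - a*a) / (2*b*c), -1.0, 1.0))
    B = np.arccos(np.clip((a*a + c*c - b*b) / (2*a*c), -1.0, 1.0))
    return np.array([A, B, np.pi - A - B])

def detect_blunders(pts1, pts2, angle_tol=0.2, vote_ratio=0.5):
    """Flag matched point pairs whose incident TIN triangles are geometrically
    dissimilar between the two images. angle_tol (radians) and vote_ratio are
    assumed tuning values for illustration only."""
    tri = Delaunay(pts1)              # TIN built on the first image's points
    total = np.zeros(len(pts1))       # triangles incident to each point
    bad = np.zeros(len(pts1))         # dissimilar triangles incident to each point
    for simplex in tri.simplices:
        diff = np.abs(triangle_angles(pts1[simplex]) - triangle_angles(pts2[simplex]))
        dissimilar = diff.max() > angle_tol
        for v in simplex:
            total[v] += 1
            bad[v] += dissimilar
    return bad / np.maximum(total, 1) > vote_ratio
```

Under a local similarity transform, corresponding triangles have equal interior angles, so a point whose incident triangles mostly disagree is a likely blunder. A complete-graph variant would compare all point triples rather than only the O(n) triangles of the TIN, which reflects the computational trade-off between the two methods described in the abstract.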
