Abstract

Image matching is a fundamental problem in multimodal image fusion. Most recent studies focus only on the non-linear radiometric distortion between coarsely registered multimodal images: the global geometric distortion between the images must first be eliminated using prior information (e.g. direct geo-referencing information and ground sample distance) before these methods can find correspondences. However, such prior information is not always available or accurate enough. In that case, users have to select ground control points manually to register the images before the methods can work; otherwise, these methods fail. To overcome this problem, we propose a robust deep learning-based multimodal image matching method that handles geometric and non-linear radiometric distortion simultaneously by exploiting deep feature maps. We observe in our study that some deep feature maps have similar grayscale distributions across modalities, so correspondences can be found on these maps with traditional geometric-distortion-robust matching methods even when significant non-linear radiometric differences exist between the original images. We can therefore focus solely on geometric distortion when matching deep feature maps, and solely on non-linear radiometric distortion when measuring patch similarity. The experimental results demonstrate that the proposed method outperforms state-of-the-art matching methods on multimodal images with both geometric and non-linear radiometric distortion.
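To make the core idea concrete, the sketch below is our own illustration, not the authors' released pipeline: it extracts shallow feature maps with a pretrained VGG16 from torchvision, ranks channels by grayscale-histogram correlation between the two modalities (an assumed selection rule), and runs a classical geometry-robust matcher (SIFT) on the best-matching maps. All function names here are hypothetical.

import cv2
import numpy as np
import torch
import torchvision.models as models

def shallow_feature_maps(img_gray, num_layers=4):
    # First few VGG16 layers (conv1_1..conv1_2); output keeps input resolution.
    backbone = models.vgg16(weights=models.VGG16_Weights.DEFAULT).features[:num_layers].eval()
    x = torch.from_numpy(img_gray).float()[None, None] / 255.0
    x = x.repeat(1, 3, 1, 1)              # replicate grayscale to 3 channels
    with torch.no_grad():
        fmaps = backbone(x)[0].numpy()    # shape: C x H x W
    return fmaps

def to_uint8(channel):
    # Stretch one feature map to 8-bit so histograms and SIFT can run on it.
    return cv2.normalize(channel, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)

def match_via_feature_maps(img_a, img_b, top_k=3):
    fa, fb = shallow_feature_maps(img_a), shallow_feature_maps(img_b)
    # Rank channels by grayscale-histogram correlation between modalities
    # (an illustrative selection rule, not necessarily the paper's).
    scores = []
    for c in range(fa.shape[0]):
        ha = cv2.calcHist([to_uint8(fa[c])], [0], None, [64], [0, 256])
        hb = cv2.calcHist([to_uint8(fb[c])], [0], None, [64], [0, 256])
        scores.append(cv2.compareHist(ha, hb, cv2.HISTCMP_CORREL))
    # Apply a geometric-distortion-robust matcher on the selected maps.
    sift = cv2.SIFT_create()
    bf = cv2.BFMatcher(cv2.NORM_L2, crossCheck=True)
    matches = []
    for c in np.argsort(scores)[::-1][:top_k]:
        ka, da = sift.detectAndCompute(to_uint8(fa[c]), None)
        kb, db = sift.detectAndCompute(to_uint8(fb[c]), None)
        if da is not None and db is not None:
            matches.extend(bf.match(da, db))
    return matches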

Highlights

  • Multimodal images reflect different characteristics and information of the observed objects because of differences in sensor imaging mechanisms

  • We propose a geometric and non-linear radiometric distortion robust multimodal image matching method in the framework of a convolutional neural network by exploiting deep feature maps

  • In order to demonstrate the effectiveness of the proposed method, we compare it with both handcrafted and deep learning-based methods


Introduction

Multimodal images reflect different characteristics and information of the observed objects because of differences in sensor imaging mechanisms. Normalized cross correlation (NCC) and mutual information (MI) are robust to radiometric changes to some extent (Chen et al., 2003; Hel-Or et al., 2014), but they still struggle with the non-linear radiometric differences between multimodal images. To improve matching performance, methods based on the phase congruency model (Kovesi, 1999), such as HOPC and CFOG (Ye et al., 2016, 2019), have been proposed. These methods still share the common weakness of area-based methods: they are difficult to adapt to geometric distortion between images.
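As a quick illustration of why such area-based measures only go so far, the minimal sketch below (a hypothetical helper, not code from the cited works) shows that NCC is exactly invariant to a linear gain/offset change but collapses under a non-monotonic radiometric mapping of the kind that can occur between modalities.

import numpy as np

def ncc(patch_a, patch_b):
    # Normalized cross-correlation of two equally sized patches.
    a = patch_a.astype(np.float64).ravel()
    b = patch_b.astype(np.float64).ravel()
    a -= a.mean()
    b -= b.mean()
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b / denom) if denom > 0 else 0.0

rng = np.random.default_rng(0)
p = rng.random((32, 32))
print(ncc(p, 2.0 * p + 10.0))   # 1.0: invariant to linear gain/offset
print(ncc(p, (p - 0.5) ** 2))   # ~0.0: non-monotonic mapping defeats NCC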
