Abstract

Reliable and robust matching of multimodal images is challenging because of the nonlinear grayscale distortion caused by radiometric differences and the geometric deformations caused by viewpoint changes between modalities. To address these problems, we propose a dense descriptor that uses the multiscale structure principal direction to capture the structural features of an image, together with a novel dissimilarity measurement. The proposed descriptor adapts effectively to nonlinear grayscale distortion and is robust to image noise, while the dissimilarity measurement handles the direction reversal caused by intensity inversion. In addition, we propose an improved pixel selection method to speed up the algorithm. Experimental results show that the proposed matching algorithm outperforms current state-of-the-art methods.
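To illustrate the general idea of a dissimilarity measurement that tolerates direction reversal under intensity inversion, the sketch below folds angular differences between direction-based descriptors so that directions differing by pi are treated as equal. This is a minimal, hypothetical example of the technique class, not the paper's actual descriptor or measure; the function name `inversion_robust_dissimilarity` and the per-pixel angle representation are assumptions.

```python
import numpy as np

def inversion_robust_dissimilarity(desc_a, desc_b):
    """Dissimilarity between two direction-based descriptors.

    desc_a, desc_b: arrays of structure-direction angles in radians.
    Intensity inversion between modalities can flip a direction by pi,
    so angular differences are folded into [0, pi/2] before averaging.
    """
    diff = np.abs(desc_a - desc_b) % np.pi        # fold difference into [0, pi)
    diff = np.minimum(diff, np.pi - diff)         # fold into [0, pi/2]
    return diff.mean()

# Two descriptors whose directions differ by exactly pi (an intensity
# inversion) yield zero dissimilarity under this folding.
a = np.array([0.1, 1.2, 2.0])
b = a + np.pi
print(inversion_robust_dissimilarity(a, b))       # ~0.0
```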
