Abstract

An effective feature descriptor is proposed for multimodal local image-patch matching. The conventional self-similarity hypercube (SSH) fails in multimodal image matching because visible and thermal images exhibit different intensity characteristics. To mitigate this problem, a dual-codebook clustering scheme is proposed for generating the descriptors: a separate codebook is extracted from the visible and the thermal images, while the local features of corresponding visible and thermal image patches share the same k-means cluster index. The experimental results show that the proposed approach effectively solves the multimodal image quantisation problem. Moreover, a voting strategy based on the proposed similarity family function makes multimodal image matching more robust than conventional state-of-the-art methods.
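
To make the dual-codebook idea concrete, the sketch below clusters paired visible/thermal local features so that each pair receives one shared k-means index, then computes a separate centroid table (codebook) per modality. This is a minimal illustration, not the paper's exact procedure: the function names (`dual_codebooks`, `quantise`), the codebook size `k`, and the step of clustering the concatenated feature pairs to obtain the shared index are all assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans

def dual_codebooks(vis_feats, thr_feats, k=64, seed=0):
    """Build one codebook per modality under a single shared k-means assignment.

    vis_feats, thr_feats: (N, D) arrays of local features taken from
    corresponding locations in paired visible/thermal patches.
    Returns (vis_codebook, thr_codebook, shared_labels).
    """
    # One cluster index per feature pair. Clustering the concatenated pair
    # is an assumption about how the shared index is obtained; the paper's
    # exact joint-clustering step may differ.
    joint = np.hstack([vis_feats, thr_feats])            # shape (N, 2D)
    labels = KMeans(n_clusters=k, n_init=10, random_state=seed).fit_predict(joint)

    # Per-modality centroids computed under the shared assignment, so
    # codeword c describes the same cluster in both modalities.
    vis_cb = np.zeros((k, vis_feats.shape[1]))
    thr_cb = np.zeros((k, thr_feats.shape[1]))
    for c in range(k):
        mask = labels == c
        if mask.any():                                   # guard empty clusters
            vis_cb[c] = vis_feats[mask].mean(axis=0)
            thr_cb[c] = thr_feats[mask].mean(axis=0)
    return vis_cb, thr_cb, labels

def quantise(feat, codebook):
    """Assign a local feature to its nearest codeword (Euclidean distance)."""
    return int(np.argmin(np.linalg.norm(codebook - feat, axis=1)))
```

Under this reading, a visible feature is quantised against the visible codebook and a thermal feature against the thermal one; because both codebooks were built from the same cluster indexing, the resulting codeword indices are directly comparable across modalities, which is what a cross-modal matching or voting stage requires.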
