Abstract

This paper proposes an approach to reliably matching keypoints on multi-sensor images. Keypoint matching techniques have been successfully applied in a wide range of fields, but in most cases their success is limited to single-mode images acquired by the same type of sensor, e.g., both images are Landsat TIRS. The common information between multi-sensor images is typically much less than that between single-sensor images, which compromises the discriminative power of descriptors and thus causes keypoint mismatches. Motivated by this observation, we propose incorporating edge information from outside the local window used to compute descriptors, in order to improve their matching performance. The edge information is extracted over the entire image and hence encodes a global descriptor that is complementary to the local descriptor. Experimental results show that the correct rate of keypoint matches increases on multi-sensor images.
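The abstract's core idea — complementing a window-based local descriptor with a global, edge-derived descriptor — can be sketched as follows. This is a minimal illustrative mock-up, not the paper's actual method: the gradient-magnitude edge map, the raw-patch local descriptor, the direction-histogram "edge context", and the weighting parameter `w` are all assumptions chosen for clarity.

```python
import numpy as np

def edge_map(img):
    # Gradient-magnitude edge map computed over the WHOLE image
    # (stand-in for the paper's global edge information).
    gy, gx = np.gradient(img.astype(float))
    return np.hypot(gx, gy)

def local_descriptor(img, kp, r=4):
    # Raw-patch local descriptor from the (2r+1)x(2r+1) window
    # around keypoint (y, x); zero-padded at image borders.
    y, x = kp
    padded = np.pad(img.astype(float), r)
    v = padded[y:y + 2 * r + 1, x:x + 2 * r + 1].ravel()
    n = np.linalg.norm(v)
    return v / n if n > 0 else v

def edge_context_descriptor(edges, kp, nbins=8):
    # Histogram of directions from the keypoint to strong edge pixels
    # across the ENTIRE image -- a global complement to the local patch.
    y, x = kp
    ys, xs = np.nonzero(edges > edges.mean())
    angles = np.arctan2(ys - y, xs - x)
    hist, _ = np.histogram(angles, bins=nbins, range=(-np.pi, np.pi))
    hist = hist.astype(float)
    n = np.linalg.norm(hist)
    return hist / n if n > 0 else hist

def combined_descriptor(img, kp, w=0.5):
    # Concatenate local and (weighted) global parts into one vector;
    # w balances the two sources of information (hypothetical choice).
    edges = edge_map(img)
    return np.concatenate(
        [local_descriptor(img, kp), w * edge_context_descriptor(edges, kp)]
    )

def match(desc_a, descs_b):
    # Nearest-neighbour matching in Euclidean distance.
    dists = [np.linalg.norm(desc_a - db) for db in descs_b]
    return int(np.argmin(dists))
```

In a real multi-sensor pipeline the local part would come from an established detector/descriptor (e.g., SIFT or ORB) and the edge map from a proper edge detector, but the structure — local vector plus globally computed edge vector, matched jointly — mirrors the idea described in the abstract.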
