Abstract
In this paper, we propose a novel hierarchical framework that combines motion and feature information to register infrared and visible videos of nearly planar scenes. In contrast to previous approaches, which directly use feature matching to find the global homography, our framework adds a coarse registration stage that estimates scale and rotation from the motion vectors of targets before matching. In the precise registration stage, based on keypoint matching, the estimated scale and rotation are used to re-locate targets and keypoints, eliminating their impact on matching. To match the keypoints strictly, we first improve the quality of keypoint matching by using normalized location descriptors together with descriptors generated from the histogram of edge orientation; second, we remove most mismatches by counting the matching directions of the correspondences. We tested our framework on a public dataset, where it outperformed two recently proposed state-of-the-art global registration methods on almost all tested videos.
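The mismatch-removal step described above keeps only correspondences whose matching direction agrees with the dominant direction across all matches. The paper's exact voting scheme is not given here, so the binning strategy and neighbour tolerance below are assumptions; this is a minimal sketch of the direction-counting idea:

```python
import math
from collections import Counter

def filter_by_direction(matches, num_bins=36):
    """Keep correspondences whose displacement direction agrees with the
    dominant direction over all matches (sketch; bin count is assumed).

    matches: list of ((x1, y1), (x2, y2)) keypoint pairs.
    Returns the filtered list.
    """
    def direction_bin(p, q):
        # Angle of the displacement vector, mapped to a bin index.
        angle = math.atan2(q[1] - p[1], q[0] - p[0])  # in [-pi, pi]
        return int((angle + math.pi) / (2 * math.pi) * num_bins) % num_bins

    bins = [direction_bin(p, q) for p, q in matches]
    dominant, _ = Counter(bins).most_common(1)[0]
    # Tolerate small direction noise: keep the dominant bin and its neighbours.
    keep = {(dominant - 1) % num_bins, dominant, (dominant + 1) % num_bins}
    return [m for m, b in zip(matches, bins) if b in keep]
```

With three matches pointing roughly rightward and one pointing straight up, the upward match is voted out as a mismatch.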
Highlights
With the development of sensors, multi-sensor image fusion has attracted a considerable amount of research interest in recent years
We build a hierarchical registration framework in which we first calculate the motion vectors of the targets; according to Equation (2), once we obtain a pair of motion vectors of the targets, we can find an accurate global homography
We use novel descriptors and a mismatch-elimination mechanism to improve the accuracy of keypoint matching, and a reservoir based on the histogram of edge orientation (HOE) matching metric, which can retain more typical matches than those used in [13,14]
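A histogram-of-edge-orientation descriptor can be sketched as follows. The paper's exact binning, edge threshold, and normalisation are not specified here, so those choices (8 orientation bins over [0, π), a relative magnitude threshold, L2 normalisation) are assumptions for illustration only:

```python
import numpy as np

def hoe_descriptor(patch, num_bins=8, mag_thresh=0.1):
    """Histogram-of-edge-orientation descriptor for a grayscale patch
    (sketch; bin count, threshold, and normalisation are assumed).
    """
    patch = patch.astype(np.float64)
    gy, gx = np.gradient(patch)               # image gradients
    mag = np.hypot(gx, gy)                    # gradient magnitude
    ang = np.mod(np.arctan2(gy, gx), np.pi)   # orientation in [0, pi)
    # Consider only strong-gradient (edge) pixels.
    edge = mag > mag_thresh * mag.max() if mag.max() > 0 else mag > 0
    hist, _ = np.histogram(ang[edge], bins=num_bins, range=(0, np.pi),
                           weights=mag[edge])
    norm = np.linalg.norm(hist)
    return hist / norm if norm > 0 else hist
```

For a patch containing a vertical step edge, the gradient is horizontal, so the histogram mass concentrates in the first orientation bin.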
Summary
With the development of sensors, multi-sensor image fusion has attracted a considerable amount of research interest in recent years. Most feature-based methods proposed in past work, such as [8,9,10], which directly adopt feature matching, struggle to obtain correspondences accurate enough to determine the global homography. Motion-based methods such as [11,12] cannot perform registration in complex scenarios. We propose a simple method to calculate the motion vectors of targets in coarse registration, which transforms the scale and rotation estimation into an easy, homologous keypoint-matching problem.
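The coarse-registration idea above can be sketched concretely: given one pair of corresponding target motion vectors, the relative scale is the ratio of their magnitudes and the relative rotation is the difference of their directions. The function name and the wrapping convention below are assumptions; the details of the paper's Equation (2) may differ:

```python
import math

def scale_rotation_from_motion(v_ir, v_vis):
    """Estimate relative scale and rotation between the infrared and
    visible views from one pair of target motion vectors (sketch).

    v_ir, v_vis: (dx, dy) motion vectors of the same target in the
    infrared and visible frames.
    """
    mag_ir = math.hypot(*v_ir)
    mag_vis = math.hypot(*v_vis)
    if mag_ir == 0 or mag_vis == 0:
        raise ValueError("motion vectors must be non-zero")
    scale = mag_vis / mag_ir
    rotation = math.atan2(v_vis[1], v_vis[0]) - math.atan2(v_ir[1], v_ir[0])
    # Wrap the angle difference into [-pi, pi).
    rotation = (rotation + math.pi) % (2 * math.pi) - math.pi
    return scale, rotation
```

For example, if a target moves by (1, 0) in the infrared frame and by (0, 2) in the visible frame, the estimated scale is 2 and the rotation is π/2.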