Abstract

Most feature-matching algorithms designed for perspective images, such as scale-invariant feature transform (SIFT), speeded-up robust features (SURF), or DAISY, construct their feature descriptors from the neighborhood information of feature points. In fish-eye images, large nonlinear distortion yields different amounts of neighborhood information at different feature points, especially when a feature pixel lies in the central region of one image while its corresponding feature pixel lies at the periphery of the other. In contrast, descriptor-Nets (D-Nets) is a feature-matching algorithm based on global information; it is more robust but time-consuming. In this paper, we employ the SIFT detector to extract feature pixels and then propose a novel feature-matching strategy based on the D-Nets algorithm. We replace the linear descriptors of the traditional D-Nets algorithm with a curve descriptor based on the hemispheric model of a fish-eye image. In the traditional D-Nets algorithm, each feature point is described by pixels drawn from the entire image, and the resulting complicated calculations slow matching. To solve this problem, we convert the traditional global D-Nets into a novel local D-Nets. In the experiments, we capture image pairs of real scenes with a binocular fish-eye camera platform. Experimental results show that the proposed local D-Nets method achieves more than 3 times as many initial matching pixels, and reduces the percentage of bad matches by 40%, compared with the best-performing method among the comparison methods. In addition, the matching pixel pairs obtained by the proposed method are evenly distributed, both in the center region with small distortion and in the peripheral region with large distortion. Meanwhile, the running time of the local D-Nets algorithm is 16 times less than that of the global D-Nets algorithm.
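To make the local-connection idea concrete, below is a minimal sketch in Python, assuming OpenCV's SIFT detector and using straight-line intensity strips in place of the paper's curve descriptors on the hemispheric model. The function name local_dnets_strips and the parameters k_neighbors and strip_samples are illustrative choices, not taken from the paper.

    # Minimal sketch: SIFT keypoints plus strip descriptors restricted to
    # each point's k nearest neighbors (the "local" D-Nets idea), instead
    # of connecting every pair of points as in global D-Nets.
    import cv2
    import numpy as np

    def local_dnets_strips(gray, k_neighbors=10, strip_samples=13):
        sift = cv2.SIFT_create()
        keypoints = sift.detect(gray, None)
        pts = np.array([kp.pt for kp in keypoints], dtype=np.float32)

        # Pairwise distances; keep only the k nearest neighbors of each
        # keypoint (index 0 of the sort is the point itself).
        dists = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=2)
        order = np.argsort(dists, axis=1)[:, 1:k_neighbors + 1]

        strips = {}
        for i, neighbors in enumerate(order):
            for j in neighbors:
                # Sample intensities along the segment from point i to
                # point j; this profile is one strip descriptor. (The
                # paper samples along curves on the hemispheric model
                # instead of straight lines.)
                ts = np.linspace(0.0, 1.0, strip_samples)
                xs = (1 - ts) * pts[i, 0] + ts * pts[j, 0]
                ys = (1 - ts) * pts[i, 1] + ts * pts[j, 1]
                profile = gray[ys.astype(int), xs.astype(int)].astype(np.float32)
                # Normalize for robustness to brightness/contrast changes.
                profile -= profile.mean()
                norm = np.linalg.norm(profile)
                strips[(i, int(j))] = profile / norm if norm > 0 else profile
        return keypoints, strips

    # Usage:
    # gray = cv2.imread("fisheye_left.png", cv2.IMREAD_GRAYSCALE)
    # kps, strips = local_dnets_strips(gray)

Because each point connects to only k_neighbors others rather than to all feature points, the number of strips grows linearly with the number of keypoints instead of quadratically, which is the source of the speedup the abstract reports.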
