Abstract

Keypoint matching is of fundamental importance in computer vision applications. Fish-eye lenses are convenient in applications that require a very wide angle of view, but their use has been limited by the lack of an effective matching algorithm. The Scale Invariant Feature Transform (SIFT) algorithm is an important computer vision technique for detecting and describing local features in images. We therefore present Tri-SIFT, a set of modifications to the SIFT algorithm that improves descriptor accuracy and matching performance on fish-eye images while preserving SIFT's original robustness to scale and rotation. After SIFT keypoint detection is completed, the points in and around each keypoint are back-projected onto a unit sphere following a fish-eye camera model. To simplify computation on the sphere, the descriptor is based on a modification of the Gradient Location and Orientation Histogram (GLOH). In addition, to improve invariance to scale and rotation in fish-eye images, gradient magnitudes are replaced by surface areas, and orientation is computed on the sphere. Extensive experiments demonstrate that our modified algorithm outperforms SIFT and other related algorithms on fish-eye images.
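The back-projection step described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: it assumes an equidistant fish-eye model (r = f·θ) with a known principal point and focal length, which is one common choice of fish-eye camera model.

```python
import numpy as np

def backproject_to_unit_sphere(u, v, cx, cy, f):
    """Back-project a pixel (u, v) onto the unit sphere.

    Assumes an equidistant fish-eye model r = f * theta, where r is the
    radial distance from the principal point (cx, cy) and theta is the
    angle of the incoming ray from the optical axis.
    """
    x, y = u - cx, v - cy
    r = np.hypot(x, y)
    theta = r / f                 # angle from the optical axis
    phi = np.arctan2(y, x)        # azimuth in the image plane
    # Spherical-to-Cartesian conversion; the optical axis is +z.
    return np.array([np.sin(theta) * np.cos(phi),
                     np.sin(theta) * np.sin(phi),
                     np.cos(theta)])
```

For example, the principal point itself maps to the sphere's pole (0, 0, 1), and a pixel at radial distance f·π/2 maps onto the sphere's equator, 90° from the optical axis.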

Highlights

  • Visual feature extraction and matching are among the most fundamental and difficult problems in computer vision and optical engineering applications

  • For the RD-Scale Invariant Feature Transform (SIFT) algorithm, performance is better at 10% and 20% distortion levels

  • We investigated the problem of matching feature points in fisheye images


Summary

Introduction

Visual feature extraction and matching are among the most fundamental and difficult problems in computer vision and optical engineering applications. A camera equipped with micro-lenses and borescopes enables the visual inspection of cavities that are difficult to access [1], whereas a camera equipped with a fish-eye lens can acquire wide field-of-view (FOV) images for thorough visual coverage of an environment. Such a camera improves the performance of ego-motion estimation by avoiding the ambiguity between translational and rotational motion [2,3]. We propose the Tri-SIFT feature matching method to overcome the radial distortion of fish-eye cameras.

Related Work
SIFT Algorithm Theory
Tri-SIFT Algorithm
Back-Projection
Orientation
Result
The Descriptor Construction
Experiment
Findings
Conclusions