Abstract

Image matching has been one of the most fundamental problems in computer vision for decades. We propose a method based on feature lines to achieve more robust image matching; it includes feature line detection, feature vector description and matching, and a devised rotation-invariant feature line transform. The feature vectors are invariant to rotation and scaling. Experimental results demonstrate the effectiveness and efficiency of the proposed method. Compared with the well-known scale-invariant feature transform (SIFT), the proposed method is less sensitive to noise, and the selected distinctive feature locations are more dispersed. For image sequences that contain strong lines, the proposed method is also more efficient. Using the feature lines obtained by our method, two scene images with different rotation angles, scales, and illumination distortions can be matched, and the matching steps are simpler.

Highlights

  • The task of finding correspondences between two images of the same scene or object, taken at different times of day or year, with different sensors, or from different viewing geometries under different circumstances, is an important and difficult part of many computer vision applications

  • We propose a method based on utilizing feature lines in order to achieve more robust image matching, which includes feature line detection, feature vector description and matching, and the devised rotation invariant feature line transform

  • Probably the most widely used detector is the Harris corner detector, proposed in 1988; Harris corners are robust to changes in rotation and intensity but very sensitive to changes in scale, so they do not provide a good basis for matching images of different sizes
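The Harris response mentioned in the last highlight can be sketched as follows; this is a minimal illustration using NumPy and SciPy, and the window size, the constant k = 0.04, and the synthetic test image are assumptions chosen for demonstration, not part of the original method's parameters:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def harris_response(img, sigma=1.0, k=0.04):
    """Harris corner response R = det(M) - k * trace(M)^2 per pixel."""
    # Image gradients
    Iy, Ix = np.gradient(img.astype(float))
    # Structure tensor M, smoothed over a Gaussian window
    Sxx = gaussian_filter(Ix * Ix, sigma)
    Syy = gaussian_filter(Iy * Iy, sigma)
    Sxy = gaussian_filter(Ix * Iy, sigma)
    det = Sxx * Syy - Sxy ** 2
    trace = Sxx + Syy
    return det - k * trace ** 2

# Synthetic image: a bright square; corners should score highest,
# edges should go negative, and flat regions stay near zero.
img = np.zeros((40, 40))
img[10:30, 10:30] = 1.0
R = harris_response(img, sigma=2.0)
```

At a corner both eigenvalues of M are large, so R is positive; along an edge only one eigenvalue is large, so det(M) is near zero and R is negative, which is why a simple threshold on R separates the three cases.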

Summary

Introduction

The task of finding correspondences between two images of the same scene or object, taken at different times of day or year, with different sensors, or from different viewing geometries under different circumstances, is an important and difficult part of many computer vision applications. Lowe[2] used a scale-invariant detector that localizes points at local scale-space maxima of the difference-of-Gaussian (DoG) in 1999, and in 2004 he presented a method called the scale-invariant feature transform (SIFT) for extracting distinctive invariant features from images that can be used to perform reliable matching between images.[3] This approach transforms an image into a large collection of local feature vectors, each of which is invariant to image translation, scaling, and rotation, and partially invariant to illumination changes and affine or three-dimensional (3-D) projection. SIFT detects feature points by searching over all scales and image locations. This paper proposes a method for image matching based on feature lines rather than feature points. This paper is organized as follows: Sec. 2 describes the scheme of feature line detection, in which distinctive feature lines with specific orientations can be detected; Sec. 3 explains the polar transform of the neighborhood region of the feature lines and presents a new rotation/scale-invariant descriptor.
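The DoG detection cited above can be sketched as follows. This is an illustrative simplification: the σ ladder, threshold, and test blob are assumptions, and the full SIFT detector additionally uses octaves, sub-pixel refinement, and edge-response rejection, none of which are shown here.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, maximum_filter, minimum_filter

def dog_extrema(img, sigmas=(1.0, 2.0, 4.0, 8.0), thresh=0.01):
    """Detect scale-space extrema of the difference-of-Gaussian (DoG)."""
    img = img.astype(float)
    blurred = [gaussian_filter(img, s) for s in sigmas]
    # DoG levels: adjacent pairs of the Gaussian pyramid
    dog = np.stack([b2 - b1 for b1, b2 in zip(blurred, blurred[1:])])
    # A candidate must be the max or min of its full 3x3x3 neighborhood
    # (two spatial dimensions plus the scale dimension).
    is_ext = (dog == maximum_filter(dog, size=3)) | (dog == minimum_filter(dog, size=3))
    strong = np.abs(dog) > thresh
    # Discard boundary scales, whose scale neighborhood is incomplete.
    interior = np.zeros_like(dog, dtype=bool)
    interior[1:-1] = True
    # Returns (scale_index, row, col) triples
    return list(zip(*np.nonzero(is_ext & strong & interior)))

# Synthetic image: a Gaussian blob; an extremum should appear at its center
# at the scale that best matches the blob's size.
yy, xx = np.mgrid[0:40, 0:40]
img = np.exp(-((xx - 20.0) ** 2 + (yy - 20.0) ** 2) / (2 * 2.5 ** 2))
points = dog_extrema(img)
```

For a bright blob the DoG response at the center is negative (blurring lowers the peak), so the detector must look for minima as well as maxima; that is why both filters are combined above.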

Feature Lines Detection
Feature Line Descriptor
Local Polar Image Descriptor
Feature Line Matching
Experiment Results
Complexity Analysis
Conclusions