Abstract

We propose a method for fast SIFT feature point matching based on matching the elements of SIFT feature descriptor vectors. First, we discretize each dimension of the feature descriptor into an array address using a fixed threshold and store the corresponding feature point labels at that address. If the same dimension of two descriptor vectors has the same discrete value, their feature point labels fall into the same address. Second, we search the mapped addresses of the descriptor vector elements to obtain the matching state of each dimension, yielding the number of matching dimensions between feature points and hence the feature dimension matching degree. We then use the feature dimension matching degree to obtain candidate (suspect) matching feature points. Finally, we apply the Euclidean distance to eliminate mismatched feature points and obtain accurate matching feature point pairs. The method is essentially a high-dimensional feature vector matching method based on local feature vector element matching. Experimental results show that the new algorithm preserves both the number of matching SIFT feature points and their matching accuracy, and that its running time is comparable to that of the HKMT, RKDT and LSH algorithms.
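The pipeline described above (discretize each descriptor dimension into an address, accumulate per-dimension matches into a matching degree, then verify candidates with the Euclidean distance) can be sketched as follows. This is a minimal illustration, not the authors' implementation: the threshold and cutoff values, and all function names, are hypothetical assumptions.

```python
import numpy as np

# All constants below are illustrative assumptions, not values from the paper.
THRESHOLD = 0.25      # fixed discretization step for descriptor elements
DEGREE_CUTOFF = 0.6   # minimum feature dimension matching degree for a suspect match
DIST_CUTOFF = 0.5     # Euclidean distance bound used to eliminate mismatches

def build_index(descriptors):
    """For each dimension, map each discrete value (address) to the labels stored there."""
    n, d = descriptors.shape
    bins = (descriptors / THRESHOLD).astype(int)   # discretize every element
    index = [dict() for _ in range(d)]
    for label in range(n):
        for dim in range(d):
            index[dim].setdefault(bins[label, dim], []).append(label)
    return index

def match(query, descriptors, index):
    """Return labels whose matching degree and Euclidean distance pass the cutoffs."""
    n, d = descriptors.shape
    qbins = (query / THRESHOLD).astype(int)
    hits = np.zeros(n, dtype=int)
    for dim in range(d):
        # labels stored at the same address share this dimension's discrete value
        for label in index[dim].get(qbins[dim], []):
            hits[label] += 1
    degree = hits / d                               # feature dimension matching degree
    suspects = np.nonzero(degree >= DEGREE_CUTOFF)[0]
    # final verification: exact Euclidean distance eliminates mismatches
    return [lab for lab in suspects
            if np.linalg.norm(query - descriptors[lab]) <= DIST_CUTOFF]
```

In practice the descriptors would be 128-dimensional SIFT vectors; the per-dimension address lookup avoids computing the full Euclidean distance against every stored descriptor, which is what makes the approach fast relative to exhaustive search.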

Highlights

  • Image feature matching is essential and crucial for computer vision, and can be employed in image retrieval[1], image stitching[2], virtual reality[3], and scene reconstruction[4,5,6]

  • The experimental results show that the performance of our proposed method is similar to that of hierarchical k-means tree (HKMT), random KD tree (RKDT) and locality sensitive hashing (LSH)

  • In order to evaluate the effectiveness of the proposed FDAM algorithm, we compare its performance with that of three ANN algorithms, namely the hierarchical k-means tree (HKMT) algorithm, the random KD tree (RKDT) algorithm and the locality sensitive hashing (LSH) algorithm, as well as with the exhaustive algorithm

Introduction

Image feature matching is essential and crucial for computer vision, and can be employed in image retrieval[1], image stitching[2], virtual reality[3], and scene reconstruction[4,5,6]. In these applications, local features are extracted from each image and an invariant local descriptor is computed for each feature. By applying a nearest neighbor search over the features, the most similar feature can be found to achieve image matching. SIFT[7] is currently the most representative feature extraction and matching method, and it has been improved by many researchers[8,9,10]. Feature points extracted with SIFT descriptors are extremely robust and adapt well to illumination change and deformation, and SIFT remains the most popular local descriptor for solving image matching problems. The experimental results show that the performance of our proposed method is similar to that of HKMT, RKDT and LSH

Related work
Output the matching point pairs
Experimental results and analysis
Evaluation of artificial transformation test
Conclusion