Abstract

In order to improve the invariance of the original SIFT (Scale Invariant Feature Transform) and reduce mismatches when multiple local regions are similar, a novel method for image local invariant feature description based on SIFT is proposed. We introduce a new texture feature descriptor called center-symmetric improved local ternary patterns (CS-ILTP), a modified version of the well-known local binary patterns (LBP) feature, and fuse it with global context information to form the local feature descriptor in the SIFT algorithm. In the feature detection step, through iterative transformation, each initial feature point derived from SIFT converges to an affine-invariant point, and an affine-invariant region is extracted. In the feature description step, the dominant orientation is computed for each feature point; the image local feature descriptor is then obtained by computing the CS-ILTP and global context components respectively. Feature matching experiments show that the image local invariant feature descriptor presented in this paper is invariant to affine transformations, scaling, rotation, illumination changes, and so on. Moreover, the number of correct feature matches on artificial landmarks increases by nearly 91% compared with the SIFT algorithm. With its high robustness and distinctiveness, the algorithm is useful for fields such as robot navigation and image retrieval.
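To make the descriptor idea concrete, the following is a minimal sketch of a center-symmetric local ternary pattern computation, the comparison scheme that CS-ILTP builds on. It only illustrates the center-symmetric ternary coding of opposite neighbours around each pixel; the paper's specific "improved" variant, the threshold value, and the fusion with global context are not reproduced here, and the function name and parameters are illustrative assumptions.

```python
import numpy as np

def cs_ltp_codes(patch, threshold=5):
    """Illustrative center-symmetric local ternary pattern (CS-LTP) codes
    over the 8-neighbourhood of each interior pixel of a grayscale patch.
    This is a sketch of the comparison scheme only, not the paper's exact
    CS-ILTP descriptor."""
    patch = patch.astype(np.int32)
    # Offsets of the 8 neighbours; center-symmetric pairs are (k, k + 4).
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    h, w = patch.shape
    codes = np.zeros((h - 2, w - 2), dtype=np.int32)
    for k in range(4):  # 4 center-symmetric neighbour pairs
        dy1, dx1 = offsets[k]
        dy2, dx2 = offsets[k + 4]
        a = patch[1 + dy1:h - 1 + dy1, 1 + dx1:w - 1 + dx1]
        b = patch[1 + dy2:h - 1 + dy2, 1 + dx2:w - 1 + dx2]
        diff = a - b
        # Ternary quantisation of each pair difference: 2 / 1 / 0
        # for "greater than +t", "within +-t", "less than -t".
        digit = np.where(diff > threshold, 2,
                         np.where(diff < -threshold, 0, 1))
        codes = codes * 3 + digit  # base-3 encoding of the 4 ternary digits
    return codes  # values in [0, 80]; a histogram of these forms a descriptor

# Usage: histogram the codes over an affine-invariant region around a keypoint,
# then concatenate with a global context histogram to obtain the final descriptor.
```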
