Abstract

This paper presents two novel methods for extracting distinctive invariant features from interest regions. The underlying idea is that similarity between visual entities in images can be measured by matching the internal layout of local self-similarities. The main contributions are twofold. First, two new texture features based on a Cartesian location grid, called Local Self-Similarities (LSS, C) and Fast Local Self-Similarities (FLSS, C), are extracted; they are modified versions of the well-known Local Self-Similarities (LSS, LP) feature based on a Log-Polar location grid. To combine the strengths of SIFT and LSS (LP), the LSS (C) and FLSS (C) features are used as the local features within the SIFT framework. Second, unlike the original LSS (LP) descriptor, which takes the maximal correlation value in each bucket to gain invariance to photometric translations, the proposed LSS (C) and FLSS (C) adopt a distribution-based representation to achieve more robust invariance to geometric translations. In image matching and object category classification experiments, LSS (C) and FLSS (C) both outperform the original LSS (LP) and achieve performance favorably comparable to SIFT. Furthermore, these descriptors have lower computational complexity and are simpler than SIFT.
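To make the self-similarity idea concrete, the sketch below computes a local self-similarity descriptor around one pixel: a small central patch is correlated (via sum-of-squared-differences mapped through an exponential) with every patch in a surrounding region, and the resulting correlation surface is pooled over a Cartesian grid of cells. All parameter values, the averaging-based pooling, and the final normalization are illustrative assumptions, not the specification given in the paper.

```python
import numpy as np

def lss_cartesian(img, cy, cx, patch=5, region=41, grid=4, var_noise=25.0):
    """Illustrative sketch of a Local Self-Similarity descriptor on a
    Cartesian grid. The caller must ensure (cy, cx) is far enough from
    the image border for the region to fit. Parameter values are
    assumptions for demonstration only."""
    r, p = region // 2, patch // 2
    center = img[cy - p:cy + p + 1, cx - p:cx + p + 1].astype(np.float64)

    # Correlation surface: compare the central patch with every patch
    # inside the surrounding region.
    size = region - patch + 1
    corr = np.zeros((size, size))
    for i in range(size):
        for j in range(size):
            y = cy - r + p + i
            x = cx - r + p + j
            cand = img[y - p:y + p + 1, x - p:x + p + 1].astype(np.float64)
            ssd = np.sum((center - cand) ** 2)
            corr[i, j] = np.exp(-ssd / var_noise)

    # Cartesian pooling: average the correlation surface over a grid x grid
    # partition (a simple distribution-style pooling, in contrast to taking
    # the maximum per log-polar bucket as in the original LSS). Integer
    # division may drop a border row/column of the surface.
    desc = np.zeros(grid * grid)
    cell = size // grid
    for gy in range(grid):
        for gx in range(grid):
            block = corr[gy * cell:(gy + 1) * cell, gx * cell:(gx + 1) * cell]
            desc[gy * grid + gx] = block.mean()

    # L2-normalize the descriptor (an assumed, common post-processing step).
    return desc / (np.linalg.norm(desc) + 1e-12)
```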
