Abstract

The scale-invariant feature transform (SIFT) descriptor has been widely applied in many fields because of its robustness to common image transformations. However, SIFT's high dimensionality makes it impractical for memory-limited systems. Several lower-dimensional variants of SIFT have therefore been proposed using subspace projection techniques. The most popular technique is Principal Component Analysis (PCA), which yields two different low-dimensional descriptors, PCA-SIFT and PSIFT: they apply PCA to the gradient fields of local patches or to a set of training descriptors, respectively. However, other subspace techniques can also be used. This paper proposes two further low-dimensional SIFTs, namely LPP-SIFT and SPCA-SIFT, by incorporating manifold subspace and sparse eigenspace learning techniques (Locality Preserving Projection and Sparse PCA serve as the exemplary implementations). Although these techniques are not novel, our results demonstrate that they can be used to produce low-dimensional SIFTs. More importantly, by comparing their performance with that of the existing low-dimensional SIFTs, we show which of them are more suitable for image matching.
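The PCA-on-training-descriptors approach the abstract attributes to PSIFT can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function names, the choice of 36 output dimensions, and the random stand-in data are assumptions for demonstration; real usage would train on actual 128-d SIFT descriptors.

```python
import numpy as np

def fit_pca(descriptors, n_components=36):
    """Learn a PCA projection from a set of 128-d SIFT descriptors.

    descriptors: (N, 128) array of training descriptors.
    Returns (mean, projection), where projection is (128, n_components).
    """
    mean = descriptors.mean(axis=0)
    centered = descriptors - mean
    # Principal axes via SVD of the centered data; rows of vt are the
    # eigenvectors of the covariance matrix, sorted by variance.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return mean, vt[:n_components].T

def project(descriptors, mean, projection):
    """Map descriptors into the learned low-dimensional subspace."""
    return (descriptors - mean) @ projection

# Toy usage: random vectors stand in for real SIFT descriptors.
rng = np.random.default_rng(0)
train = rng.standard_normal((500, 128))
mean, proj = fit_pca(train, n_components=36)
low_dim = project(train, mean, proj)
print(low_dim.shape)  # (500, 36)
```

The same fit/project interface would apply to the other subspace methods the paper considers (Locality Preserving Projection, Sparse PCA); only the way the projection matrix is learned changes.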
