Abstract

Robust local descriptors usually consist of high-dimensional feature vectors that describe distinctive characteristics of images. The high dimensionality of a feature vector incurs considerable costs in computational time and storage. It also gives rise to the curse of dimensionality, which degrades the performance of tasks that rely on feature vectors, such as image matching, retrieval, and classification. To address these problems, dimensionality reduction techniques can be employed, although they frequently lead to information loss and, consequently, reduced accuracy. This work applies linear dimensionality reduction to the scale-invariant feature transform (SIFT) and speeded-up robust features (SURF) descriptors. The objective is to demonstrate that, even at the risk of lowering the accuracy of the feature vectors, the reduction yields a satisfactory trade-off between computational time and storage requirements. We perform linear dimensionality reduction through random projections, principal component analysis, linear discriminant analysis, and partial least squares to create lower-dimensional feature vectors. These reduced descriptors require less computational time and memory storage, and in some cases even improve accuracy. We evaluate the reduced feature vectors in a matching application, as well as their distinctiveness in image retrieval. Finally, we assess the computational time and storage requirements by comparing the original and the reduced feature vectors.
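The abstract does not specify an implementation, but as a minimal sketch of the pipeline it describes, the snippet below extracts 128-dimensional SIFT descriptors with OpenCV and compresses them with two of the unsupervised techniques mentioned, PCA and Gaussian random projections, via scikit-learn. The image file name and the target dimensionality of 32 are illustrative assumptions, and the supervised methods (LDA and PLS) would additionally require class labels for the descriptors.

```python
# Minimal sketch (assumptions: OpenCV with SIFT support, scikit-learn,
# a sample grayscale image "image.png", and an illustrative target of 32 dims).
import cv2
from sklearn.decomposition import PCA
from sklearn.random_projection import GaussianRandomProjection

# Extract 128-D SIFT descriptors from a grayscale image.
gray = cv2.imread("image.png", cv2.IMREAD_GRAYSCALE)
sift = cv2.SIFT_create()
keypoints, descriptors = sift.detectAndCompute(gray, None)  # descriptors: (N, 128)

if descriptors is not None and descriptors.shape[0] >= 32:
    # Linear reduction 1: PCA projects descriptors onto the top principal directions.
    pca = PCA(n_components=32)
    desc_pca = pca.fit_transform(descriptors)            # (N, 32)

    # Linear reduction 2: Gaussian random projection multiplies by a random
    # matrix with normally distributed entries.
    grp = GaussianRandomProjection(n_components=32, random_state=0)
    desc_rp = grp.fit_transform(descriptors)             # (N, 32)

    print(descriptors.shape, desc_pca.shape, desc_rp.shape)
```

The reduced 32-dimensional vectors can then stand in for the original 128-dimensional descriptors in matching or retrieval, trading some accuracy for lower computational and storage costs, as the abstract argues.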
