Abstract

State-of-the-art large-scale image retrieval systems have mainly relied on two seminal works: the SIFT descriptor and the bag-of-features (BOF) model. However, as image datasets grow, the discriminative power of SIFT descriptors weakens rapidly when they are mapped to visual words. In this paper, we present a new approach that generates visual word pairs for image retrieval. Two different descriptors are employed to represent the same interest region, and a visual word pair is then obtained by quantizing the descriptor pair with two independent codebooks. By encoding different types of information about the same region, our approach can effectively boost the matching accuracy of descriptors. We evaluate our approach with the INRIA Holidays dataset on a 120K-image database, and the experimental results suggest that our approach significantly improves the retrieval performance of the BOF model.
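The pairing step can be made concrete with a short sketch. The code below is a minimal illustration, not the authors' implementation: the two descriptor types are stubbed with random data, the codebooks are assumed to be pre-trained offline (e.g., by k-means), and plain nearest-centroid assignment stands in for whatever quantizer the paper actually uses.

```python
import numpy as np

def quantize(descriptors, codebook):
    # Assign each descriptor (n, d) to its nearest codeword (k, d)
    # under Euclidean distance; returns (n,) visual word ids.
    d2 = ((descriptors[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
    return d2.argmin(axis=1)

def visual_word_pairs(desc_a, desc_b, codebook_a, codebook_b):
    # desc_a[i] and desc_b[i] describe the SAME interest region with two
    # different descriptor types; each is quantized with its own,
    # independently trained codebook.
    words_a = quantize(desc_a, codebook_a)
    words_b = quantize(desc_b, codebook_b)
    # A region matches across images only if BOTH word ids agree,
    # which tightens the usual single-word BOF match criterion.
    return list(zip(words_a, words_b))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Hypothetical setup: 100 regions, a SIFT-like 128-D descriptor and a
    # second 64-D descriptor per region; codebook sizes are illustrative.
    sift_like = rng.normal(size=(100, 128))
    other = rng.normal(size=(100, 64))
    cb_a = rng.normal(size=(1000, 128))  # codebook for descriptor type A
    cb_b = rng.normal(size=(1000, 64))   # codebook for descriptor type B
    pairs = visual_word_pairs(sift_like, other, cb_a, cb_b)
    print(pairs[:5])
```

Because the two codebooks are independent, a pair over two 1000-word vocabularies behaves like a single vocabulary of up to one million joint words, which is one way to read the abstract's claim of improved descriptor matching accuracy.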
