Abstract

The traditional bag-of-visual-words (BOW) model quantises local features into visual words to achieve efficient content-based image retrieval. However, because it introduces considerable quantisation error and ignores the spatial relationships between visual words, the accuracy of partial-duplicate image retrieval based on the BOW model is limited. To reduce the quantisation error and improve the discriminability of visual words, many partial-duplicate image retrieval methods have been proposed that exploit the geometric clues between visual words. In this paper, we propose a novel partial-duplicate image retrieval scheme that uses both spatial and visual contextual clues to remove false matches effectively: it not only encodes the orientation, distance and dominant-orientation relationships between a referential visual word and its context, but also takes the colour information between visual words into consideration. Experimental results reveal that the proposed algorithm achieves performance superior to state-of-the-art methods for partial-duplicate image retrieval.
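To make the spatial-context idea concrete, below is a minimal Python sketch of one common way such clues are encoded: for a referential keypoint, each context keypoint is described by the orientation of its displacement vector (measured relative to the reference's dominant orientation, for rotation invariance), its distance, and the difference of dominant orientations, all quantised into bins. The function names, bin counts, and the Jaccard-overlap verification are illustrative assumptions, not the paper's exact quantisation or scoring.

```python
import numpy as np

def spatial_context_codes(ref, context, n_angle_bins=8, n_dist_bins=4,
                          n_ori_bins=8, max_dist=100.0):
    """Encode context keypoints relative to a referential keypoint.

    ref / context entries: (x, y, dominant_orientation), orientation in
    radians. Binning granularity here is an assumption for illustration.
    """
    rx, ry, rtheta = ref
    codes = set()
    for cx, cy, ctheta in context:
        dx, dy = cx - rx, cy - ry
        dist = float(np.hypot(dx, dy))
        if dist == 0.0 or dist > max_dist:
            continue
        # Orientation of the displacement vector, relative to the
        # reference's dominant orientation (rotation invariance).
        angle = (np.arctan2(dy, dx) - rtheta) % (2 * np.pi)
        # Difference of dominant orientations between the two keypoints.
        dori = (ctheta - rtheta) % (2 * np.pi)
        a_bin = int(angle / (2 * np.pi) * n_angle_bins) % n_angle_bins
        d_bin = min(int(dist / max_dist * n_dist_bins), n_dist_bins - 1)
        o_bin = int(dori / (2 * np.pi) * n_ori_bins) % n_ori_bins
        codes.add((a_bin, d_bin, o_bin))
    return codes

def context_similarity(codes_a, codes_b):
    """Jaccard overlap of two context code sets; a low score flags a
    likely false match between the two referential keypoints."""
    union = codes_a | codes_b
    if not union:
        return 0.0
    return len(codes_a & codes_b) / len(union)
```

In a retrieval pipeline, a candidate correspondence between two referential visual words would be kept only if `context_similarity` exceeds a threshold; the paper additionally verifies colour information between visual words, which this sketch omits.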
