Abstract

The objective of Content-based Image Retrieval (CBIR) systems is to return the most similar images given a query image. In this scenario, accurately ranking collection images is of great relevance. In general, CBIR systems consider only pairwise image analysis, that is, they compute similarity measures over pairs of images, ignoring the rich information encoded in the relations among several images. This paper presents a novel re-ranking approach based on contextual spaces that aims to improve the effectiveness of CBIR tasks by exploiting relations among images. In our approach, the information encoded in both the distances among images and the ranked lists computed by CBIR systems is used to analyze contextual information. The re-ranking method can also be applied to other tasks, such as: (i) combining ranked lists obtained with different image descriptors (rank aggregation); and (ii) combining post-processing methods. We conducted several experiments involving shape, color, and texture descriptors, as well as comparisons to other post-processing methods. Experimental results demonstrate the effectiveness of our method.
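To make the general idea of contextual re-ranking concrete, the sketch below illustrates one simple way distances and ranked lists can be combined: each image's top-k ranked list is treated as its context, and pairwise distances are updated according to how much two contexts overlap. This is a minimal illustrative sketch under assumed inputs (a pairwise distance matrix `dist`, a neighborhood size `k`, and a blending weight `alpha`), not the specific contextual-spaces algorithm proposed in the paper.

```python
import numpy as np

def contextual_rerank(dist, k=5, alpha=0.5):
    """Toy contextual re-ranking sketch (illustrative, not the paper's exact method).

    dist  : (n, n) pairwise distance matrix produced by a CBIR descriptor.
    k     : size of the neighborhood (top of each ranked list) used as context.
    alpha : weight balancing original distances and contextual dissimilarity.
    """
    n = dist.shape[0]
    # Ranked lists: for each image, indices sorted by increasing distance.
    ranked = np.argsort(dist, axis=1)
    # Context of each image: its k nearest neighbors.
    neighbors = [set(ranked[i, :k]) for i in range(n)]
    # Contextual dissimilarity: 1 - Jaccard overlap of the two neighborhoods.
    ctx = np.zeros_like(dist, dtype=float)
    for i in range(n):
        for j in range(n):
            inter = len(neighbors[i] & neighbors[j])
            union = len(neighbors[i] | neighbors[j])
            ctx[i, j] = 1.0 - inter / union
    # Blend original and contextual distances, then recompute the ranked lists.
    new_dist = alpha * dist + (1.0 - alpha) * ctx
    return np.argsort(new_dist, axis=1), new_dist
```

The same blending step suggests how rank aggregation could work in this style: distance matrices from different descriptors (e.g., shape and color) could each contribute a contextual term before the final ranked lists are recomputed.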
