Abstract

Image re-ranking is effective in improving the performance of text-based image search. However, the improvements achieved by existing re-ranking algorithms are limited by two factors: the textual information associated with images often mismatches their actual visual content, and visual features cannot accurately capture the semantic similarities between images. In this paper, we adopt click data to bridge this semantic gap. We propose a novel multi-view hypergraph-based learning (MHL) method that adaptively integrates click data with varied visual features. In particular, MHL constructs a semantic manifold by imposing pairwise discriminative constraints derived from click data, which maximally distinguish images with high click counts from images with no clicks. It then adopts hypergraph learning to build multiple manifolds from the varied visual features. Finally, MHL integrates the semantic manifold with the visual manifolds through an iterative optimization procedure that simultaneously obtains the weights of the different manifolds and the re-ranking scores. Experiments on real-world datasets demonstrate that MHL outperforms state-of-the-art image re-ranking methods.
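The abstract describes the iterative optimization only at a high level, so the sketch below is a minimal illustration of how such an alternating scheme typically works, not the paper's actual algorithm. It assumes each manifold (the click-based semantic manifold and the hypergraph-based visual manifolds) is represented by a graph Laplacian, and it uses a standard weighted multi-graph regularization objective; the function name `multi_manifold_rerank` and the parameters `mu` (fit-to-initial-ranking weight) and `r` (weight-smoothing exponent) are illustrative assumptions, not the paper's notation.

```python
import numpy as np

def multi_manifold_rerank(laplacians, y, mu=1.0, r=2.0, n_iter=20):
    """Hypothetical sketch of alternating optimization over manifold
    weights and re-ranking scores (NOT the paper's exact objective).

    Minimizes   sum_k (alpha_k ** r) * f^T L_k f  +  mu * ||f - y||^2
    subject to  sum_k alpha_k = 1, alpha_k >= 0,
    alternating between the score vector f and the weights alpha.

    laplacians : list of (n, n) Laplacian matrices, one per manifold
    y          : length-n initial relevance scores (e.g. text ranking)
    """
    m = len(laplacians)          # number of manifolds (semantic + visual)
    n = y.shape[0]               # number of images in the candidate list
    alpha = np.full(m, 1.0 / m)  # start from uniform manifold weights
    f = y.copy()

    for _ in range(n_iter):
        # Step 1: fix alpha, solve for f in closed form:
        # (sum_k alpha_k^r L_k + mu I) f = mu y
        L = sum((a ** r) * Lk for a, Lk in zip(alpha, laplacians))
        f = np.linalg.solve(L + mu * np.eye(n), mu * y)

        # Step 2: fix f, update alpha; for exponent r > 1 the
        # constrained minimizer has the closed form
        # alpha_k ∝ (f^T L_k f) ** (1 / (1 - r))
        costs = np.array([f @ Lk @ f for Lk in laplacians])
        costs = np.maximum(costs, 1e-12)   # guard against zero cost
        alpha = costs ** (1.0 / (1.0 - r))
        alpha /= alpha.sum()

    return f, alpha
```

With r > 1, a manifold on which the current scores vary smoothly receives a larger weight, so the manifold weights and the re-ranking scores are obtained within the same loop, consistent with the behavior the abstract attributes to MHL.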
