Abstract

Our goal is to boost text-based image search results via image reranking. Images offer diverse modalities (features) that can be leveraged for reranking; however, the effect of each modality is query-dependent. The primary challenge is how to fuse multiple modalities adaptively for different queries, a problem that has often been overlooked in previous reranking research. Moreover, multimodal fusion without an understanding of the query is risky and may lead to incorrect reranking judgments. Therefore, to obtain the best fusion weights for each query, we leverage click-through data, which can be viewed as implicit user feedback and an effective means of understanding the query. We propose a novel reranking algorithm, called click-based relevance feedback, which uses click-through data to identify user search intention while employing a multiple kernel learning algorithm to adaptively learn query-dependent fusion weights for multiple modalities. We conduct experiments on a real-world data set with click-through data collected from a commercial search engine. Encouraging experimental results demonstrate that our proposed reranking approach significantly improves the NDCG@10 of the initial search results by 11.62% and outperforms several existing approaches across most query types, including tail, middle, and top queries.
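
To make the fusion idea concrete, the sketch below shows what query-dependent multimodal reranking might look like: per-modality similarity scores are combined with nonnegative weights, and the weights are estimated from click-through counts treated as implicit relevance feedback. This is a minimal illustration under stated assumptions, not the paper's method; the paper learns weights with multiple kernel learning, whereas the correlation-based estimate here is a simple stand-in, and all function names and data are hypothetical.

```python
import numpy as np

def fuse_and_rerank(modality_scores, weights):
    """Rerank images by a weighted combination of per-modality scores.

    modality_scores: (num_modalities, num_images) array; row m holds
        each image's similarity to the query under modality m
        (e.g., color, texture, deep features).
    weights: (num_modalities,) nonnegative fusion weights summing to 1,
        assumed to be learned per query.
    Returns image indices sorted from most to least relevant.
    """
    fused = weights @ modality_scores      # convex combination of scores
    return np.argsort(-fused)              # descending fused relevance

def estimate_weights_from_clicks(modality_scores, clicks):
    """Toy stand-in for the learning step (NOT the paper's MKL):
    weight each modality by how well its scores agree with click
    counts, clicks acting as implicit relevance feedback.

    clicks: (num_images,) click counts for the query's initial results.
    """
    # Correlation between each modality's scores and the click signal.
    agree = np.array([np.corrcoef(s, clicks)[0, 1] for s in modality_scores])
    agree = np.clip(agree, 0.0, None)      # keep weights nonnegative
    if agree.sum() == 0:
        return np.full(len(modality_scores), 1.0 / len(modality_scores))
    return agree / agree.sum()             # normalize to sum to 1

# Hypothetical example: three modalities scoring five candidate images.
scores = np.array([
    [0.9, 0.2, 0.7, 0.1, 0.5],   # e.g., color-histogram similarity
    [0.3, 0.8, 0.6, 0.2, 0.4],   # e.g., texture similarity
    [0.7, 0.4, 0.9, 0.3, 0.6],   # e.g., deep-feature similarity
])
clicks = np.array([12, 1, 9, 0, 3])        # click-through counts
w = estimate_weights_from_clicks(scores, clicks)
print("fusion weights:", w)
print("reranked order:", fuse_and_rerank(scores, w))
```

Because the weights are recomputed from each query's own click signal, a query whose clicks track color similarity will weight color heavily, while another query may favor texture or deep features, which is the adaptive, query-dependent behavior the abstract describes.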
