Abstract

Question answering (QA)-based re-ranking methods for cross-modal retrieval have recently been proposed to further distinguish among similar candidate images. Conventional QA-based re-ranking methods present questions to users by analyzing the candidate images, and the initial retrieval results are re-ranked based on the users' answers. However, focusing only on performance improvement makes it difficult to efficiently elicit the user's retrieval intention. To realize more useful QA-based re-ranking, the user interaction for eliciting the retrieval intention must be taken into account. In this paper, we propose a QA-based re-ranking method that considers two factors important for eliciting the user's retrieval intention: query-image relevance and recallability. Considering query-image relevance allows the method to focus only on the candidate images related to the provided query text, while considering recallability allows users to easily answer the provided question. With these procedures, our method can efficiently and effectively elicit the user's retrieval intention. Experimental results on Microsoft Common Objects in Context and a computationally constructed dataset containing similar candidate images show that our method improves the performance of both cross-modal retrieval methods and QA-based re-ranking methods.
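The following is a minimal sketch, not the authors' implementation, of how the two factors described above could be combined in a QA-based re-ranking loop: candidates are first filtered by query-image relevance, and in each round the question with the highest recallability is asked before re-scoring. Every helper here (encode_text, encode_image, generate_question, recallability, answer_consistency, ask_user) is a hypothetical placeholder supplied by the caller, e.g. a cross-modal encoder and a VQA-style question generator.

```python
import numpy as np


def qa_rerank(query_text, candidates, encode_text, encode_image,
              generate_question, recallability, answer_consistency,
              ask_user, relevance_threshold=0.3, num_rounds=3):
    """Sketch of QA-based re-ranking using query-image relevance and
    recallability. All callables are hypothetical placeholders."""
    # Query-image relevance: keep only candidates whose image embedding
    # is sufficiently similar to the query-text embedding.
    q = encode_text(query_text)
    q = q / (np.linalg.norm(q) + 1e-8)
    scores = {}
    for c in candidates:
        v = encode_image(c)
        scores[c] = float(q @ (v / (np.linalg.norm(v) + 1e-8)))
    pool = [c for c in candidates if scores[c] >= relevance_threshold]

    for _ in range(num_rounds):
        if len(pool) <= 1:
            break
        # Recallability: among questions generated from the remaining
        # candidates, ask the one the user can most easily answer.
        question = max((generate_question(c) for c in pool), key=recallability)
        answer = ask_user(question)
        # Update each candidate's score by its consistency with the answer.
        for c in pool:
            scores[c] += answer_consistency(c, question, answer)

    # Final ranking: highest updated score first.
    return sorted(pool, key=lambda c: scores[c], reverse=True)
```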
