Abstract

In this paper, we focus on the composed query image retrieval task, namely retrieving target images that match a composed query, in which a modification text is combined with a query image to precisely describe a user's search intention. Previous methods usually focus on learning joint image-text representations but rarely consider the intrinsic relationship among the query image, the target image, and the modification text. To address this problem, we propose a new cross-modal joint prediction and alignment framework for composed query image retrieval. In our framework, the modification text is regarded as an implicit transformation between the query image and the target image. Accordingly, not only should the combination of the query image and the modification text be similar to the target image, but the modification text should also be predictable from the query image and the target image. We align this bidirectional relationship with a novel Joint Prediction Module (JPM). Our framework can seamlessly incorporate the JPM into existing methods to improve the discrimination and robustness of the visual and textual representations. Experiments on three public datasets demonstrate the effectiveness of the proposed framework, showing that the JPM can be easily incorporated into existing methods while effectively improving their performance.
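To make the bidirectional objective concrete, the sketch below shows one plausible way to pair a composition branch (query image + text should match the target image) with a prediction branch (query and target images should recover the text), in PyTorch. It is a minimal illustration under our own assumptions: the module names, the simple MLP fusion, the cosine-similarity losses, and the equal loss weighting are all hypothetical and are not taken from the paper's actual JPM implementation.

```python
# Minimal sketch of the two complementary objectives described above.
# All names, dimensions, fusion architectures, and loss weights here are
# illustrative assumptions, not the paper's actual JPM design.
import torch
import torch.nn as nn
import torch.nn.functional as F

class JointPredictionSketch(nn.Module):
    def __init__(self, dim=512):
        super().__init__()
        # Composition branch: fuse query-image and text features so the
        # result can be aligned with the target-image feature.
        self.compose = nn.Sequential(
            nn.Linear(2 * dim, dim), nn.ReLU(), nn.Linear(dim, dim))
        # Prediction branch: recover the modification-text feature from the
        # (query image, target image) pair, treating the text as an implicit
        # transformation between the two images.
        self.predict = nn.Sequential(
            nn.Linear(2 * dim, dim), nn.ReLU(), nn.Linear(dim, dim))

    def forward(self, q_img, t_img, text):
        # q_img, t_img, text: (batch, dim) features from pretrained encoders.
        composed = self.compose(torch.cat([q_img, text], dim=-1))
        predicted_text = self.predict(torch.cat([q_img, t_img], dim=-1))
        # Alignment losses: composed query <-> target image, and
        # predicted text <-> actual modification text.
        loss_compose = 1 - F.cosine_similarity(composed, t_img, dim=-1).mean()
        loss_predict = 1 - F.cosine_similarity(predicted_text, text, dim=-1).mean()
        return loss_compose + loss_predict  # equal weighting is an assumption

# Usage with random stand-in features:
model = JointPredictionSketch(dim=512)
q, t, m = (torch.randn(8, 512) for _ in range(3))
loss = model(q, t, m)
loss.backward()
```

Because both branches operate purely on encoder features, a module of this shape could in principle be attached to an existing composed-retrieval model as an auxiliary loss, which is consistent with the plug-in role the abstract claims for the JPM.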
