Abstract

In this paper, we study cross-modal image retrieval, where the input consists of a source image together with text describing the modifications that should be applied to it, and the goal is to retrieve the desired image. Prior work usually adopts a three-stage strategy for this task: 1) extracting features from the inputs; 2) fusing the features of the source image and the modifying text into a fusion feature; 3) learning a similarity metric between the desired image and the fused representation via deep metric learning. Because off-the-shelf image/text encoders already learn useful representations and standard pair-based metric-learning losses suffice for cross-modal retrieval, most methods improve retrieval accuracy by designing new fusion networks. However, these methods do not adequately handle the modality gap caused by the inconsistent feature distributions of the different modalities, which degrades both feature fusion and similarity learning. To alleviate this problem, we apply the contrastive self-supervised learning method Deep InfoMax (DIM) [1] to bridge this gap by strengthening the dependence between the text, the image, and their fusion. Specifically, our method narrows the gap between the text modality and the image modality by maximizing mutual information between their semantically inconsistent representations. Moreover, we seek an effective common subspace for the semantically consistent features of the fusion and the desired images by applying Deep InfoMax between a low-level layer of the image encoder and a high-level layer of the fusion network. Extensive experiments on three large-scale benchmarks show that our method bridges the modality gap between the modalities and achieves state-of-the-art retrieval performance.
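
As an illustration only, the following is a minimal sketch (not the authors' implementation) of how a DIM-style mutual-information objective between two sets of paired features could be written in PyTorch, using an InfoNCE-style contrastive estimator as the lower bound; all tensor names, shapes, and the temperature value are assumptions made for this example.

# Minimal sketch of a DIM-style mutual-information objective, assuming an
# InfoNCE-style contrastive estimator. Not the authors' implementation;
# all names, shapes, and the temperature are illustrative assumptions.
import torch
import torch.nn.functional as F


def infonce_mi_loss(feats_a, feats_b, temperature=0.07):
    """Maximize a lower bound on I(A; B) for paired features
    (feats_a[i], feats_b[i]) by contrasting each true pair against
    all other pairings in the batch."""
    a = F.normalize(feats_a, dim=-1)
    b = F.normalize(feats_b, dim=-1)
    logits = a @ b.t() / temperature                     # (B, B) similarity matrix
    targets = torch.arange(a.size(0), device=a.device)   # true pairs lie on the diagonal
    return F.cross_entropy(logits, targets)


# Hypothetical features from a text encoder, an image encoder, and a
# fusion network (random tensors here, just to show the call pattern).
batch_size, dim = 32, 512
text_feat = torch.randn(batch_size, dim)
source_img_feat = torch.randn(batch_size, dim)
fusion_feat = torch.randn(batch_size, dim)
desired_img_feat = torch.randn(batch_size, dim)

# One term narrows the text-image modality gap; the other aligns the
# fusion feature with the desired-image feature.
loss = infonce_mi_loss(text_feat, source_img_feat) \
     + infonce_mi_loss(fusion_feat, desired_img_feat)
print(loss.item())

In practice the two terms would be computed on encoder outputs (so that gradients flow back into the encoders and the fusion network) and added to the retrieval loss; the specific layers they are attached to and their weighting follow the paper, not this sketch.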
