Abstract

Visual grounding, which aims to align image regions with textual queries, is a fundamental task in cross-modal learning. We study weakly supervised visual grounding, where only coarse-grained image-text pairs are available. Due to the lack of fine-grained correspondence information, existing approaches often suffer from matching ambiguity. To overcome this challenge, we introduce a cycle consistency constraint on region-phrase pairs, which strengthens correlated pairs and weakens unrelated ones. This cycle pairing exploits the bidirectional association between image regions and text phrases to alleviate matching ambiguity. Furthermore, we propose a parallel grounding framework in which backbone networks and subsequent relation modules extract individual and contextual representations to compute context-free and context-aware similarities between regions and phrases, respectively. These two representations characterize individual visual/linguistic concepts and their inter-relationships, and complement each other to achieve cross-modal alignment. The whole framework is trained by minimizing an image-text contrastive loss and a cycle consistency loss. During inference, the two similarities are fused to produce the final region-phrase matching score. Experiments on five popular visual grounding datasets demonstrate that our method achieves a noticeable improvement. The source code is available at https://github.com/Evergrow/WSVG.
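
To make the cycle pairing concrete, the sketch below shows one way a soft cycle-consistency loss over a region-phrase similarity matrix could be written in PyTorch. It is a minimal illustration only: the function name, the temperature `tau`, and the use of soft attention in both directions are assumptions and do not reproduce the released implementation at the repository above.

```python
import torch
import torch.nn.functional as F

def cycle_consistency_loss(sim: torch.Tensor, tau: float = 0.1) -> torch.Tensor:
    """Illustrative soft cycle-consistency loss (assumed formulation).

    sim: (P, R) similarity matrix between P phrases and R regions of one image,
         e.g. a fused context-free/context-aware similarity.
    Each phrase attends to regions (phrase -> region) and each region attends
    back to phrases (region -> phrase); composing the two round trips should
    map every phrase back to itself, penalizing ambiguous matches.
    """
    p2r = F.softmax(sim / tau, dim=1)        # (P, R): phrase -> region attention
    r2p = F.softmax(sim.t() / tau, dim=1)    # (R, P): region -> phrase attention
    cycle = p2r @ r2p                        # (P, P): round-trip distribution, rows sum to 1
    target = torch.arange(sim.size(0))       # each phrase should return to itself
    return F.nll_loss(torch.log(cycle + 1e-8), target)

# Hypothetical usage: combine with an image-text contrastive loss during training.
sim = torch.randn(4, 10)                     # 4 phrases, 10 candidate regions
loss = cycle_consistency_loss(sim)
```

The key design choice in this sketch is that both directions use soft (differentiable) attention rather than hard argmax matching, so the cycle constraint can be optimized jointly with the contrastive objective.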
