Abstract

In recent years, cross-modal retrieval has become a popular research topic in both computer vision and natural language processing. Because different modalities have heterogeneous properties, a large semantic gap exists between them, and establishing correlations among data from different modalities remains highly challenging. In this work, we propose a novel end-to-end framework named Dual Multi-Angle Self-Attention (DMASA) for cross-modal retrieval. Multiple self-attention mechanisms are applied to extract fine-grained features for both images and texts from different angles. We then integrate coarse-grained and fine-grained features into a multimodal embedding space, in which the similarity between images and texts can be compared directly. Moreover, we propose a multistage training strategy, in which each stage provides a good initialization for the next and improves the performance of the framework. Our method achieves very promising results compared with state-of-the-art methods on three benchmark datasets: Flickr8k, Flickr30k, and MSCOCO.
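The abstract does not give implementation details, but the core idea of comparing images and texts in a shared multimodal embedding space can be sketched roughly as follows. The feature dimensions, linear projections, and cosine-similarity scoring below are illustrative assumptions for a minimal sketch, not the authors' exact DMASA architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class JointEmbedding(nn.Module):
    """Projects image and text features into a shared embedding space
    where cross-modal similarity can be compared directly (sketch)."""
    def __init__(self, img_dim=2048, txt_dim=768, embed_dim=512):
        super().__init__()
        self.img_proj = nn.Linear(img_dim, embed_dim)  # image branch (assumed)
        self.txt_proj = nn.Linear(txt_dim, embed_dim)  # text branch (assumed)

    def forward(self, img_feats, txt_feats):
        # L2-normalize so the dot product equals cosine similarity
        img_emb = F.normalize(self.img_proj(img_feats), dim=-1)
        txt_emb = F.normalize(self.txt_proj(txt_feats), dim=-1)
        # similarity matrix: rows index images, columns index texts
        return img_emb @ txt_emb.t()

# usage: score a batch of images against a batch of candidate texts
model = JointEmbedding()
img_feats = torch.randn(4, 2048)    # e.g. pooled CNN image features
txt_feats = torch.randn(4, 768)     # e.g. text-encoder features
sims = model(img_feats, txt_feats)  # (4, 4) image-text similarity scores
```

In retrieval, each row (or column) of the similarity matrix is ranked to return the most relevant texts for an image, or images for a text.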
