Abstract
Visual grounding aims to locate a specific region in a given image guided by a natural language query. It relies on fine-grained alignment between visual information and text semantics. We propose a one-stage visual grounding model based on cross-modal feature fusion, which treats the task as a coordinate regression problem and is optimized end to end. The bounding-box coordinates are predicted directly from the fused features; however, previous fusion methods such as element-wise product, summation, and concatenation are too simple to combine the deep information within the feature vectors. To improve the quality of the fused features, we incorporate a co-attention mechanism that deeply transforms the representations of the two modalities. We evaluate our grounding model on publicly available datasets, including Flickr30k Entities, RefCOCO, RefCOCO+, and RefCOCOg. Quantitative results show that the co-attention mechanism plays a positive role in multi-modal feature fusion for visual grounding.
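To make the described architecture concrete, the following is a minimal sketch (not the authors' implementation) of a co-attention fusion module feeding a coordinate regression head. All names here, such as `CoAttentionFusion`, `visual_feats`, and `text_feats`, as well as the specific layer sizes and pooling choices, are illustrative assumptions rather than details taken from the paper.

```python
import torch
import torch.nn as nn

class CoAttentionFusion(nn.Module):
    """Sketch: cross-attend visual and textual features, then regress box coordinates."""
    def __init__(self, dim=256):
        super().__init__()
        # visual tokens attend to text tokens, and text tokens attend to visual tokens
        self.v2t_attn = nn.MultiheadAttention(dim, num_heads=8, batch_first=True)
        self.t2v_attn = nn.MultiheadAttention(dim, num_heads=8, batch_first=True)
        # regression head predicting normalized (cx, cy, w, h)
        self.box_head = nn.Sequential(
            nn.Linear(2 * dim, dim), nn.ReLU(), nn.Linear(dim, 4)
        )

    def forward(self, visual_feats, text_feats):
        # visual_feats: (B, Nv, dim) image region/grid features
        # text_feats:   (B, Nt, dim) query token features
        v_attended, _ = self.v2t_attn(visual_feats, text_feats, text_feats)
        t_attended, _ = self.t2v_attn(text_feats, visual_feats, visual_feats)
        # pool each modality and concatenate for the regression head
        fused = torch.cat([v_attended.mean(dim=1), t_attended.mean(dim=1)], dim=-1)
        return torch.sigmoid(self.box_head(fused))  # normalized box coordinates

# usage sketch with random features standing in for encoder outputs
fusion = CoAttentionFusion(dim=256)
boxes = fusion(torch.randn(2, 49, 256), torch.randn(2, 12, 256))
print(boxes.shape)  # torch.Size([2, 4])
```

Compared with element-wise product, summation, or concatenation, the cross-attention step lets each modality re-weight the other's tokens before pooling, which is the kind of deeper interaction the abstract attributes to co-attention.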