Abstract

Visual Grounding (VG) is the task of locating the specific object in an image that semantically matches a given linguistic expression. The task poses two main challenges: mapping linguistic content to visual content, and understanding diverse linguistic expressions. In recent years, the performance of visual grounding has been consistently improved by deep visual features. While deep visual features contain rich information, they can also be noisy, biased, and prone to over-fitting. In contrast, symbolic features are discrete, easy to map, and usually less noisy. In this work, we propose a novel modular network that learns to match both an object's symbolic features and its conventional visual features against the linguistic information. Moreover, we design a Residual Attention Parser to alleviate the difficulty of understanding diverse expressions. Our model achieves competitive performance on three popular VG datasets.
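
The following is a minimal sketch, not the authors' implementation, of the core idea the abstract describes: scoring each candidate object by matching both its dense visual features and its discrete symbolic features against an encoding of the expression, then fusing the two matching scores. All module names, dimensions, and the fusion-by-sum choice are illustrative assumptions.

```python
# Sketch of dual (visual + symbolic) matching for visual grounding.
# Hypothetical names and dimensions; not the paper's architecture.
import torch
import torch.nn as nn


class DualMatchingScorer(nn.Module):
    def __init__(self, vis_dim=2048, sym_vocab=1600, lang_dim=512, hid=512):
        super().__init__()
        self.vis_proj = nn.Linear(vis_dim, hid)          # project dense CNN region features
        self.sym_emb = nn.EmbeddingBag(sym_vocab, hid)   # embed discrete symbolic labels (mean-pooled bag)
        self.lang_proj = nn.Linear(lang_dim, hid)        # project the pooled expression encoding
        self.vis_score = nn.Linear(hid, 1)
        self.sym_score = nn.Linear(hid, 1)

    def forward(self, vis_feats, sym_ids, lang_feat):
        # vis_feats: (num_objects, vis_dim) visual features per candidate region
        # sym_ids:   (num_objects, num_symbols) integer ids of symbolic labels per region
        # lang_feat: (lang_dim,) pooled encoding of the linguistic expression
        q = torch.tanh(self.lang_proj(lang_feat))         # (hid,)
        v = torch.tanh(self.vis_proj(vis_feats))          # (num_objects, hid)
        s = torch.tanh(self.sym_emb(sym_ids))             # (num_objects, hid)
        # element-wise matching of each modality with the language query
        vis_match = self.vis_score(v * q).squeeze(-1)     # (num_objects,)
        sym_match = self.sym_score(s * q).squeeze(-1)     # (num_objects,)
        return vis_match + sym_match                      # fused grounding score per candidate


# Usage: ground the expression by picking the highest-scoring candidate.
scorer = DualMatchingScorer()
vis = torch.randn(5, 2048)                    # 5 candidate regions
sym = torch.randint(0, 1600, (5, 3))          # 3 symbolic labels per region
lang = torch.randn(512)                       # encoded expression
print(scorer(vis, sym, lang).argmax().item()) # index of the grounded object
```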
