Abstract

Referring Expression Comprehension (REC) is an important task in the vision-and-language community, since it is an essential step for many cross-modal tasks such as VQA, image retrieval and image captioning. To obtain a better trade-off between speed and accuracy, existing studies usually follow a one-stage paradigm, in which the task is treated as language-conditioned object detection. Previous one-stage REC frameworks explore many different design choices, such as the fusion strategy, the fusion stage and the design of the detection head. Surprisingly, these works largely overlook the value of integrating multi-level features and often use only single-scale features to locate the target. In this paper, we focus on rethinking and improving feature pyramids for one-stage REC. Through experimental validation, we first show that although multi-scale fusion is an effective way to improve performance, mature neck structures from object detection (e.g., FPN, BFN and HRFPN) have only a limited impact on this task. Further, by visualizing the outputs of FPN, we find that the underlying reason is that these coarse-grained fusion strategies suffer from a semantic ambiguity problem. Based on these insights, we propose a new Language-Guided FPN (LG-FPN), which dynamically allocates and selects fine-grained information by stacking language-gate and union-gate modules. Extensive comparative and ablation experiments show that LG-FPN is an effective and reliable module that adapts to different visual backbones, fusion strategies and detection heads. Finally, our method achieves state-of-the-art performance on four referring expression datasets.
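To make the gating idea concrete, the sketch below illustrates one possible form of language-conditioned gating over pyramid levels. It is a minimal PyTorch illustration under our own assumptions, not the LG-FPN implementation from the paper: the module names (LanguageGate, UnionGate), gate designs and dimensions are hypothetical.

```python
# Minimal sketch of language-guided gating over FPN levels (illustrative only).
# Module names, gate designs and dimensions are assumptions, not the paper's code.
from typing import List

import torch
import torch.nn as nn


class LanguageGate(nn.Module):
    """Modulates one pyramid level with a channel gate predicted from the language embedding."""
    def __init__(self, vis_dim: int, lang_dim: int):
        super().__init__()
        self.fc = nn.Linear(lang_dim, vis_dim)

    def forward(self, feat: torch.Tensor, lang: torch.Tensor) -> torch.Tensor:
        # feat: (B, C, H, W) visual feature map; lang: (B, D) sentence embedding
        gate = torch.sigmoid(self.fc(lang)).unsqueeze(-1).unsqueeze(-1)  # (B, C, 1, 1)
        return feat * gate  # suppress channels irrelevant to the expression


class UnionGate(nn.Module):
    """Fuses the language-gated levels with per-location level weights before detection."""
    def __init__(self, vis_dim: int, num_levels: int):
        super().__init__()
        self.gate_conv = nn.Conv2d(vis_dim * num_levels, num_levels, kernel_size=1)

    def forward(self, feats: List[torch.Tensor]) -> torch.Tensor:
        # feats: list of (B, C, H, W) maps already resized to a common resolution
        stacked = torch.cat(feats, dim=1)
        weights = torch.softmax(self.gate_conv(stacked), dim=1)  # (B, L, H, W)
        return sum(w.unsqueeze(1) * f for w, f in zip(weights.unbind(dim=1), feats))


if __name__ == "__main__":
    B, C, D = 2, 256, 512
    lang = torch.randn(B, D)
    levels = [torch.randn(B, C, 32, 32) for _ in range(3)]  # pre-resized pyramid levels
    lg = LanguageGate(C, D)
    ug = UnionGate(C, num_levels=3)
    fused = ug([lg(f, lang) for f in levels])
    print(fused.shape)  # torch.Size([2, 256, 32, 32])
```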
