Abstract

Referring expression comprehension (REC) is a challenging task that involves locating a particular object in an image based on a natural language query. Although REC shows potential for identifying objects beyond a fixed set of predefined categories, existing models achieve limited accuracy on categories not seen during training. To overcome this limitation, we introduce a new setting, Open-Category Referring Expression Comprehension, which focuses on a model's ability to generalize to unseen categories, and present a Multi-modal Knowledge Transfer REC (MTKREC) framework to address this challenge. Specifically, to handle diverse novel categories, our framework first constructs an isolated proposal embedding method that integrates pre-training knowledge from CLIP: each object proposal is isolated by cropping it and passing the crop to CLIP to obtain a box-level embedding, while a box-level proposal embedding is concurrently obtained from Faster-RCNN. Then, inspired by ResNet, our framework adopts a Residual Self-Attention (RSA) strategy within the fusion module to make full use of the information provided by the isolated proposal embeddings. To further bolster the model's capabilities, we transfer knowledge from UNITER by reusing its parameters during the multi-modal fusion process, and explore knowledge distillation techniques to improve the model's performance. We also construct new datasets sub-sampled from the RefCOCO, RefCOCO+, and RefCOCOg datasets to enable evaluation in this setting. Extensive experiments on these datasets demonstrate the effectiveness of our framework.
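For concreteness, the following PyTorch-style sketch illustrates the two ideas outlined above: obtaining box-level CLIP embeddings by cropping each proposal, and fusing features through a residual self-attention block. This is a minimal sketch under stated assumptions, not the authors' implementation; the function and class names (isolated_proposal_embeddings, ResidualSelfAttention) are illustrative, and it assumes OpenAI's CLIP interface (encode_image plus its image preprocessing transform) together with precomputed Faster-RCNN region features.

import torch
import torch.nn as nn


def isolated_proposal_embeddings(image, boxes, clip_model, clip_preprocess, rcnn_feats):
    """Crop each proposal from the image, encode the crop with CLIP, and pair it
    with the corresponding Faster-RCNN box-level feature.

    image:       PIL.Image of the full scene
    boxes:       list of (x1, y1, x2, y2) proposal coordinates
    rcnn_feats:  tensor of shape (num_boxes, d_rcnn) from the detector head
    """
    crops = [clip_preprocess(image.crop(box)) for box in boxes]  # isolate each proposal
    with torch.no_grad():
        clip_emb = clip_model.encode_image(torch.stack(crops))   # box-level CLIP embeddings
    return clip_emb, rcnn_feats                                   # two embeddings per proposal


class ResidualSelfAttention(nn.Module):
    """Self-attention over proposal/text tokens with a ResNet-style skip connection,
    so the fused representation retains the original isolated-proposal signal."""

    def __init__(self, dim, num_heads=8):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, tokens):                   # tokens: (batch, seq_len, dim)
        attended, _ = self.attn(tokens, tokens, tokens)
        return self.norm(tokens + attended)      # residual addition before normalization

In this sketch, the residual addition is what distinguishes the block from plain self-attention: the isolated proposal embeddings are carried forward unchanged alongside the attended features, which is the role the abstract attributes to the RSA strategy.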
