Abstract

Referring video object segmentation (R-VOS), the task of segmenting the object described by a natural language query from video frames, has become increasingly important with recent advances in multi-modal understanding. Existing approaches are mainly visual-dominant in both representation learning and decision making, and are less sensitive to fine-grained clues in the text description. To address this, we propose a language-guided contrastive learning and data augmentation framework that enhances the model's sensitivity to the fine-grained textual clues (e.g., color, location, subject) that relate most strongly to the video content. By substituting key information in the original sentences and paraphrasing them with a text-based generation model, our approach automatically builds diverse and fluent contrastive samples for contrastive learning. We further strengthen multi-modal alignment with a sparse attention mechanism that identifies the most relevant video information via optimal transport. Experiments on a large-scale R-VOS benchmark show that our method significantly improves strong Transformer-based baselines, and further analysis demonstrates that our model better identifies textual semantics.
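To make the contrastive sample construction described above concrete, the sketch below illustrates one simple way to build a hard negative by swapping a fine-grained attribute word (color or location) in a referring expression. All function names, word lists, and the rule-based substitution strategy here are illustrative assumptions, not the paper's actual implementation, which additionally paraphrases the edited sentences with a text-based generation model.

```python
import random

# Illustrative attribute vocabularies (assumed; not taken from the paper).
COLORS = ["red", "blue", "green", "black", "white", "yellow"]
LOCATIONS = ["left", "right", "front", "back", "middle"]


def substitute_attribute(expression: str) -> str:
    """Create a hard negative by replacing one fine-grained clue
    (a color or location word) with a different value from the same set."""
    tokens = expression.split()
    for i, tok in enumerate(tokens):
        if tok in COLORS:
            tokens[i] = random.choice([c for c in COLORS if c != tok])
            return " ".join(tokens)
        if tok in LOCATIONS:
            tokens[i] = random.choice([l for l in LOCATIONS if l != tok])
            return " ".join(tokens)
    # No substitutable clue found; a real pipeline might fall back to
    # paraphrasing or subject replacement instead.
    return expression


# Example: "the red car on the left" -> e.g. "the blue car on the left"
print(substitute_attribute("the red car on the left"))
```

Negatives built this way differ from the original query only in one semantic detail, so pairing them against the matched video features in a contrastive objective pushes the model to attend to exactly those fine-grained textual clues.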
