Abstract

Transformers with collaborative experts have become a powerful framework for video-text retrieval. Specifically, each expert captures a specialized property of the video (e.g., appearance, motion, or audio), and the video encoder aggregates these expert features. However, previous works guide the video transformer only implicitly, by solving auxiliary video-text tasks with the expert features; simple concatenation of expert features as input to the video transformer is their only mechanism for exploiting expert knowledge. In this paper, we propose an expert-guided contrastive loss to fully exploit expert knowledge from videos. In detail, we sample a positive bag using an expert-wise similarity matrix to train the text encoder, and we decompose the text representation into dynamic and static factors from the given videos. Through extensive experiments, we verify the effectiveness of the proposed method. Notably, we also demonstrate that our method brings significant improvements under the expert-based framework and can be combined with CLIP-based architectures for further performance gains.
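To make the core mechanism concrete, the following is a minimal sketch of how a positive bag sampled from an expert-wise similarity matrix might feed a multi-positive contrastive loss. It is illustrative only, not the paper's exact formulation: the function name, the averaging of similarities over experts, and the top-k bag size are all assumptions.

```python
import torch
import torch.nn.functional as F

def expert_guided_contrastive_loss(text_emb, video_emb, expert_feats, k=3, tau=0.07):
    """Hypothetical sketch of an expert-guided contrastive loss.

    text_emb:     (B, D) text embeddings
    video_emb:    (B, D) video embeddings
    expert_feats: list of (B, D_e) per-expert video features
                  (e.g., appearance, motion, audio)
    """
    # Build an expert-wise similarity matrix by averaging cosine
    # similarities over all experts (aggregation rule is an assumption).
    sim = torch.zeros(text_emb.size(0), text_emb.size(0), device=text_emb.device)
    for f in expert_feats:
        f = F.normalize(f, dim=-1)
        sim = sim + f @ f.t()
    sim = sim / len(expert_feats)

    # Sample a positive bag: each anchor's k most expert-similar
    # videos (plus itself) are treated as positives.
    pos_mask = torch.zeros_like(sim, dtype=torch.bool)
    pos_mask.scatter_(1, sim.topk(k, dim=-1).indices, True)
    pos_mask.fill_diagonal_(True)

    # Multi-positive InfoNCE over the text-video similarity matrix.
    logits = F.normalize(text_emb, dim=-1) @ F.normalize(video_emb, dim=-1).t() / tau
    log_prob = logits - logits.logsumexp(dim=-1, keepdim=True)
    loss = -(log_prob * pos_mask).sum(dim=-1) / pos_mask.sum(dim=-1)
    return loss.mean()
```

Under this sketch, videos that the experts agree are similar to the anchor supervise the text encoder as additional positives, rather than being treated as hard negatives by a standard one-to-one contrastive loss.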
