Abstract
Pre-trained vision-language (ViL) models have demonstrated strong zero-shot capability in video understanding tasks, where they are typically adapted through fine-tuning or temporal modeling. However, in the task of open-vocabulary temporal action localization (OV-TAL), such adaptation reduces the robustness of ViL models to different data distributions, leading to a misalignment between visual representations and text descriptions of unseen action categories. As a result, existing methods are often forced to trade off action detection against action classification. To address this issue, this paper proposes DeTAL, a simple but effective two-stage approach for OV-TAL. DeTAL decouples action detection from action classification to avoid the compromise between them, so state-of-the-art methods for closed-set action localization can be readily adapted to OV-TAL, which significantly improves performance. Moreover, DeTAL easily handles the scenario where action category annotations are unavailable in the training dataset. In the experiments, we propose a new cross-dataset setting to evaluate the zero-shot capability of different methods, and the results demonstrate that DeTAL outperforms state-of-the-art OV-TAL methods on both THUMOS14 and ActivityNet-1.3.
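The abstract describes a decoupled two-stage pipeline: class-agnostic action detection followed by zero-shot classification with a frozen ViL model. The sketch below illustrates that idea only at a conceptual level; the function names (propose_segments, encode_segment, encode_text) are hypothetical placeholders, not the authors' actual implementation or API.

```python
# Conceptual sketch of a decoupled OV-TAL pipeline (assumed structure,
# not the authors' code). Stage 1 localizes segments without category
# labels; Stage 2 classifies them against free-form text with a frozen
# vision-language model, so the vision-text alignment is not fine-tuned away.
import torch
import torch.nn.functional as F


def localize_open_vocabulary(video_features, candidate_labels,
                             propose_segments, encode_segment, encode_text):
    # Stage 1: a category-agnostic detector proposes (start, end, actionness)
    # segments; it can be any closed-set localizer trained without class labels.
    segments = propose_segments(video_features)

    # Stage 2: embed the candidate category names once with the frozen text encoder.
    text_emb = F.normalize(encode_text(candidate_labels), dim=-1)   # (C, D)

    results = []
    for start, end, actionness in segments:
        # Embed the proposed segment with the frozen visual encoder.
        seg_emb = F.normalize(encode_segment(video_features, start, end), dim=-1)  # (D,)
        sims = seg_emb @ text_emb.T                                  # cosine similarity per class
        cls_idx = int(sims.argmax())
        results.append((start, end, candidate_labels[cls_idx],
                        actionness * sims[cls_idx].item()))
    return results
```

Because the detector never sees category labels, the candidate vocabulary at inference time can include categories unseen during training, which is the open-vocabulary setting the abstract targets.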