Abstract

Zero-shot Action Recognition (ZSAR) aims to bridge the video-to-class relation using only labeled training data from seen classes, while generalizing the model to handle the heterogeneity of unseen actions. Most existing methods represent videos and action classes comprehensively; however, the semantic gap and the hubness problem between the two modalities remain crucial, under-explored challenges. In this paper, we propose an effective method to tackle both issues. Specifically, to narrow the semantic gap, we generate a spatio-temporal semantic description for each video in an end-to-end manner, which provides essential textual information for refining the video representation. Furthermore, we propose a compactness-separability loss that optimizes intra- and inter-class relations in a unified formulation and quantitatively constrains the cluster distribution, thereby effectively diminishing the impact of the hubness problem. Extensive experiments on the UCF101, HMDB51, and Olympic Sports datasets verify the effectiveness of the proposed approach and show that it outperforms state-of-the-art methods.
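
The abstract does not give the exact formulation of the compactness-separability loss. The following is a minimal PyTorch-style sketch of one way such a loss can jointly enforce intra-class compactness and margin-based inter-class separability in a single objective; the function name, the `margin`, and the weighting factor `lam` are illustrative assumptions, not the paper's actual parameters.

```python
import torch
import torch.nn.functional as F


def compactness_separability_loss(embeddings: torch.Tensor,
                                  labels: torch.Tensor,
                                  margin: float = 1.0,
                                  lam: float = 0.5) -> torch.Tensor:
    """embeddings: (N, D) video features; labels: (N,) integer class ids."""
    classes = labels.unique()  # sorted unique class ids
    # Class centroids: mean embedding per class.
    centroids = torch.stack([embeddings[labels == c].mean(dim=0) for c in classes])

    # Intra-class compactness: squared distance of each sample to its own centroid.
    centroid_per_sample = centroids[torch.searchsorted(classes, labels)]
    compactness = (embeddings - centroid_per_sample).pow(2).sum(dim=1).mean()

    # Inter-class separability: hinge on pairwise centroid distances so that
    # clusters are pushed at least `margin` apart (a quantitative constraint).
    if len(classes) > 1:
        dists = torch.cdist(centroids, centroids)                      # (C, C)
        mask = ~torch.eye(len(classes), dtype=torch.bool, device=dists.device)
        separability = F.relu(margin - dists[mask]).pow(2).mean()
    else:
        separability = embeddings.new_zeros(())

    # Unified objective: tight clusters plus a penalty on clusters closer than the margin.
    return compactness + lam * separability
```

In use, such a term would be added to the standard ZSAR training objective on seen-class batches, so that video embeddings form compact, well-separated clusters before nearest-neighbor matching against unseen class prototypes.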
