Abstract

Traditional approaches to video tagging propagate tags at the same level, e.g., assigning the tags of training videos (or shots) to test videos (or shots): generating video-level tags for a test video when the training videos carry video-level tags, or assigning shot-level tags to a test shot given a collection of annotated shots. This paper focuses on automatic shot tagging given a collection of videos annotated only at the video level; in other words, we aim to assign specific tags from the training videos to a test shot. We address this video-to-shot (V2S) problem by assigning the test shot a subset of the tags drawn from a subset of the training videos. To this end, we first propose a novel Graph Sparse Group Lasso (GSGL for short) model that linearly reconstructs the visual feature of the test shot from the visual features of the training videos, i.e., it learns the correlation between the test shot and the training videos. We then propose a new tag propagation rule that assigns video-level tags to the test shot according to the learned correlations. Moreover, to build the reconstruction model effectively, the proposed GSGL simultaneously takes several constraints into account: the inter-group sparsity, the intra-group sparsity, the temporal-spatial prior knowledge in the training videos, and the local structure of the test shot. Extensive experiments on public video datasets clearly demonstrate the effectiveness of the proposed method for video-to-shot tag propagation.

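To make the pipeline concrete, here is a minimal, illustrative Python sketch of the two stages the abstract describes: a sparse-group-lasso reconstruction of a test shot's feature from grouped training features, followed by a weight-based tag propagation rule. This is a generic sparse group lasso solved by proximal gradient, not the authors' exact GSGL: it omits the graph terms (the temporal-spatial prior and the local structure of the test shot), and the names `D`, `groups`, `lam1`, `lam2`, as well as the assumption that each training video contributes a group of shot-level feature columns, are our own illustrative choices.

```python
import numpy as np

def sparse_group_lasso(x, D, groups, lam1=0.1, lam2=0.1, n_iter=500):
    """Reconstruct the test-shot feature x from the columns of D, where
    each entry of `groups` indexes the columns belonging to one training
    video. The l1 term gives intra-group sparsity, the group-l2 term gives
    inter-group sparsity. A sketch only: the paper's GSGL adds graph terms
    (temporal-spatial prior, local structure) that are omitted here."""
    w = np.zeros(D.shape[1])
    lr = 1.0 / np.linalg.norm(D, 2) ** 2      # step size from Lipschitz constant
    for _ in range(n_iter):
        grad = D.T @ (D @ w - x)              # gradient of 0.5 * ||x - D w||^2
        v = w - lr * grad
        # prox of the l1 penalty: elementwise soft-thresholding
        v = np.sign(v) * np.maximum(np.abs(v) - lr * lam1, 0.0)
        # prox of the group-l2 penalty: shrink each video's block of weights
        for g in groups:
            norm = np.linalg.norm(v[g])
            v[g] *= max(0.0, 1.0 - lr * lam2 / norm) if norm > 0 else 0.0
        w = v
    return w

def propagate_tags(w, groups, video_tags):
    """Score each tag by the total reconstruction weight of the training
    videos that carry it (an illustrative propagation rule; the paper's
    exact rule may differ)."""
    scores = {}
    for g, tags in zip(groups, video_tags):
        weight = np.abs(w[g]).sum()
        for t in tags:
            scores[t] = scores.get(t, 0.0) + weight
    return sorted(scores.items(), key=lambda kv: -kv[1])

# Toy usage: 12 shot features from 3 training videos, 4 shots each.
rng = np.random.default_rng(0)
D = rng.standard_normal((64, 12))
groups = [np.arange(0, 4), np.arange(4, 8), np.arange(8, 12)]
video_tags = [["dog"], ["dog", "beach"], ["car"]]
x = D[:, 5] + 0.1 * rng.standard_normal(64)   # test shot close to a shot of video 1
w = sparse_group_lasso(x, D, groups)
print(propagate_tags(w, groups, video_tags))  # "dog"/"beach" should rank highest
```

In this toy run the group penalty drives the weights of unrelated videos toward zero, so the test shot inherits tags only from the few videos that actually explain its feature, which is the behaviour the V2S setting requires.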