Abstract

Text-to-video temporal grounding aims to locate the video moment that semantically corresponds to a given sentence query in an untrimmed video. For this task, fully supervised works require a text description for each event along with its temporal segment coordinates for training, which is labor-intensive. Existing weakly supervised works require only video-sentence pairs but cannot achieve satisfactory performance. Meanwhile, many available annotations in the form of coarse temporal boundaries for sentences are ignored and unexploited. These coarse boundaries are common on streaming media platforms and can be collected in a mechanical manner. We propose a novel approach to perform fine-grained text-to-video temporal grounding from these coarse boundaries. We take dense video captioning as the base task and leverage the trained captioning model to identify the relevance of each video frame to the sentence query according to each frame's participation in event captioning. To quantify this participation, we propose the event activation sequence, a simple method that highlights the temporal regions of a video that correlate strongly with the text modality. Experiments on a modified ActivityNet Captions dataset and a use case demonstrate the promising fine-grained performance of our approach.
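To make the idea of an event activation sequence concrete, the sketch below shows one plausible way to turn a captioning model's per-word cross-attention over video frames into a per-frame activation score and then read off a grounded segment. The attention-pooling formulation, the `threshold` parameter, and the longest-run decoding rule are illustrative assumptions, not the paper's actual method.

```python
import torch


def event_activation_sequence(cross_attn, eps=1e-8):
    """Collapse cross-attention weights into per-frame activation scores.

    cross_attn: tensor of shape (num_caption_tokens, num_frames), holding
    the attention each generated caption word pays to each video frame.
    (Hypothetical formulation for illustration only.)
    """
    # Average attention over caption tokens -> one score per frame.
    frame_scores = cross_attn.mean(dim=0)
    # Min-max normalize to [0, 1] so scores are comparable across queries.
    frame_scores = (frame_scores - frame_scores.min()) / (
        frame_scores.max() - frame_scores.min() + eps
    )
    return frame_scores


def ground_segment(frame_scores, threshold=0.5):
    """Pick the longest contiguous run of frames above the threshold as the
    grounded moment (an illustrative decoding rule, not the paper's)."""
    above = (frame_scores >= threshold).tolist()
    best, cur_start = (0, 0), None
    for i, flag in enumerate(above + [False]):  # sentinel closes the last run
        if flag and cur_start is None:
            cur_start = i
        elif not flag and cur_start is not None:
            if i - cur_start > best[1] - best[0]:
                best = (cur_start, i)
            cur_start = None
    return best  # (start_frame, end_frame) of the predicted moment


# Toy usage: 12 caption tokens attending over 30 video frames.
attn = torch.rand(12, 30)
scores = event_activation_sequence(attn)
print(ground_segment(scores))
```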
