Abstract

To detect and recount events in videos with limited training examples, we propose a novel two-stage hybrid concept temporal pooling approach that is aware of potential concept drift in the video stream. We first partition each video into a temporal pyramid of keyframes. Semantic concepts are detected in the keyframes, which enables us to derive aggregated detection scores for each temporal pyramid segment via average-pooling and, ultimately, for the entire video via max-pooling. Owing to this refined hybrid pooling, our method yields semantic representations that are more discriminative with respect to the event query. We also develop an effective filtering strategy that copes with noisy concept detectors, making the textual description generation in recounting more robust. Experiments on the large-scale TRECVID MEDTest dataset demonstrate that our method improves accuracy over state-of-the-art methods for both event detection and recounting.
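To make the hybrid pooling step concrete, the following is a minimal sketch of average-pooling concept scores within each segment of a temporal pyramid level and then max-pooling across segments. The function name, array shapes, and segment representation are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def hybrid_temporal_pooling(keyframe_scores, segments):
    """Hypothetical sketch of the two-stage hybrid pooling.

    keyframe_scores: array of shape (num_keyframes, num_concepts)
        with per-keyframe concept detection scores (assumed layout).
    segments: list of (start, end) keyframe index ranges forming
        one level of the temporal pyramid (assumed representation).
    """
    # Stage 1: average-pool concept scores inside each pyramid segment.
    segment_scores = np.stack([
        keyframe_scores[start:end].mean(axis=0)
        for start, end in segments
    ])
    # Stage 2: max-pool the segment-level scores over the whole video.
    return segment_scores.max(axis=0)

# Example: 8 keyframes, 5 concepts, a two-segment pyramid level.
scores = np.random.rand(8, 5)
video_repr = hybrid_temporal_pooling(scores, segments=[(0, 4), (4, 8)])
print(video_repr.shape)  # (5,) -- one aggregated score per concept
```

Under these assumptions, average-pooling smooths keyframe-level noise within a segment, while max-pooling keeps the strongest segment response per concept, which is what makes the final video-level representation discriminative even when a concept appears only briefly.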
