Abstract

Localizing moments in a longer video via natural language queries is a new, challenging task at the intersection of language and video understanding. Though moment localization with natural language is similar to other language and vision tasks like natural language object retrieval in images, moment localization offers an interesting opportunity to model temporal dependencies and reasoning in text. We propose a new model that explicitly reasons about different temporal segments in a video, and show that temporal context is important for localizing phrases which include temporal language. To benchmark whether our model, and other recent video localization models, can effectively reason about temporal language, we collect the novel TEMPOral reasoning in video and language (TEMPO) dataset. Our dataset consists of two parts: a dataset with real videos and template sentences (TEMPO - Template Language), which allows for controlled studies on temporal language, and a human language dataset which consists of temporal sentences annotated by humans (TEMPO - Human Language).

Highlights

  • Queries like “the girl bends down” require understanding objects and actions, but do not require reasoning about different video moments

  • We note that strong supervision (SS) outperforms weak supervision (WS) and that the temporal endpoint feature (TEF) context is important for best performance

  • We compare our best-performing model trained on TEMPO-TL to prior work (MCN and TALL) and to Moment Localization with Latent Context (MLLC) with global and before/after context
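The temporal endpoint feature (TEF) mentioned above encodes when a candidate moment occurs within the video. A minimal sketch of the common formulation, normalized start and end times appended to a moment's visual feature (the helper name and shapes here are illustrative assumptions, not the paper's code):

```python
import numpy as np

def add_temporal_endpoint_features(moment_feature, start_clip, end_clip, num_clips):
    """Append normalized start/end times (TEF) to a moment's visual feature.

    start_clip/end_clip index the candidate moment's first and last clip;
    the TEF is simply [start/num_clips, end/num_clips], letting the model
    distinguish otherwise-identical moments by their position in the video.
    (Hypothetical helper for illustration.)
    """
    tef = np.array([start_clip / num_clips, end_clip / num_clips])
    return np.concatenate([moment_feature, tef])

# A 4-dim visual feature for a moment spanning clips 2..4 of a 6-clip video:
feat = add_temporal_endpoint_features(np.zeros(4), 2, 4, 6)
# The last two dimensions now encode where the moment sits in the video.
```

Without such positional information, a model cannot tell a "before" moment from an "after" moment when their visual content is similar, which is why TEF matters for temporal language.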


Summary

Introduction

Queries like “the girl bends down” require understanding objects and actions, but do not require reasoning about different video moments. Queries like “the little girl talks after bending down” require reasoning about the temporal relationship between different actions (“talk” and “bend down”). Localizing natural language queries in video is an important challenge, recently studied in Hendricks et al. (2017) and Gao et al. (2017), with applications in areas such as video search and retrieval.

(Figure: example query “The little girl talks after bending down.”)

