Abstract

This work strives for the classification and localization of human actions in videos, without the need for any labeled video training examples. Where existing work relies on transferring global attribute or object information from seen to unseen action videos, we seek to classify and spatio-temporally localize unseen actions in videos from image-based object information only. We propose three spatial object priors, which encode local person and object detectors along with their spatial relations. On top of these, we introduce three semantic object priors, which extend semantic matching through word embeddings with three simple functions that tackle semantic ambiguity, object discrimination, and object naming. A video embedding combines the spatial and semantic object priors. It enables us to introduce a new video retrieval task that retrieves action tubes in video collections based on user-specified objects, spatial relations, and object size. Experimental evaluation on five action datasets shows the importance of spatial and semantic object priors for unseen actions. We find that persons and objects have preferred spatial relations that benefit unseen action localization, while using multiple languages and simple object filtering directly improves semantic matching, leading to state-of-the-art results for both unseen action classification and localization.
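To make the recipe in the abstract concrete, the sketch below illustrates one way unseen actions can be scored from image-based object information only: detected object names are matched to action names through word embeddings, and each detection is weighted by its detector confidence and a simple person-object spatial prior. This is a minimal, hypothetical sketch under stated assumptions, not the paper's exact formulation; the toy embeddings, the detection format, and the spatial_weight decay are all illustrative choices.

```python
# Minimal, illustrative sketch of unseen-action scoring from object detections.
# Not the paper's exact formulation: the toy embeddings, detection format, and
# spatial weighting below are assumptions made purely for demonstration.
import numpy as np

# Toy word embeddings standing in for a real model (e.g., word2vec / FastText).
EMB = {
    "archery": np.array([0.9, 0.1, 0.0]),
    "bow":     np.array([0.8, 0.2, 0.1]),
    "cycling": np.array([0.1, 0.9, 0.1]),
    "bicycle": np.array([0.2, 0.8, 0.0]),
    "pullup":  np.array([0.1, 0.1, 0.9]),
    "bar":     np.array([0.0, 0.2, 0.8]),
}

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-8))

def box_center(box):
    x1, y1, x2, y2 = box
    return (x1 + x2) / 2.0, (y1 + y2) / 2.0

def spatial_weight(person_box, object_box):
    # Illustrative spatial prior: objects close to the detected person count more.
    (px, py), (ox, oy) = box_center(person_box), box_center(object_box)
    dist = np.hypot(px - ox, py - oy)
    return 1.0 / (1.0 + dist / 100.0)  # soft decay; the pixel scale is an assumption

def score_action(action, person_box, detections):
    # Aggregate detector confidence x spatial prior x semantic similarity.
    return sum(conf * spatial_weight(person_box, box) * cosine(EMB[action], EMB[name])
               for name, conf, box in detections)

# One frame: a person box and two object detections (name, confidence, box).
person = (100, 50, 180, 300)
objects = [("bow", 0.9, (150, 120, 220, 200)), ("bar", 0.3, (400, 20, 600, 40))]
actions = ["archery", "cycling", "pullup"]
scores = {a: score_action(a, person, objects) for a in actions}
print(max(scores, key=scores.get), scores)  # expected to favour "archery"
```

The paper's actual embedding model, object vocabulary, and spatial relations are richer; the point of the sketch is only the scoring structure: detector confidence, spatial prior, and semantic similarity combined per action, with the highest-scoring action name assigned to the video.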

Highlights

  • The goal of this paper is to classify and localize human actions in video, such as shooting a bow, doing a pull-up, and cycling

  • Human action recognition has a long tradition in computer vision, with initial success stemming from spatio-temporal interest points (Chakraborty et al. 2012; Laptev 2005), dense trajectories (Wang et al. 2013; Jain et al. 2013), and cuboids (Kläser et al. 2010; Liu et al. 2008)

  • We evaluate the importance of spatial relations between persons and local object detections for unseen action classification and localization

Summary

Introduction

The goal of this paper is to classify and localize human actions in video, such as shooting a bow, doing a pull-up, and cycling. Progress has recently been accelerated by deep learning, with the introduction of video networks exploiting two-streams (Feichtenhofer et al. 2016; Simonyan and Zisserman 2014) and 3D convolutions (Carreira and Zisserman 2017; Tran et al. 2019; Zhao et al. 2018; Feichtenhofer et al. 2019). Building on such networks, current action localizers have shown the ability to detect actions precisely in both space and time. However, obtaining training videos with spatio-temporal annotations (Chéron et al. 2018; Mettes and Snoek 2019) is expensive and error-prone, limiting the ability to generalize to any action. We aim for action classification and localization without the need for any video examples during training.
