Abstract

This paper addresses the problem of silhouette-based human action segmentation and recognition in monocular sequences. Motion History Images (MHIs), used as 2D templates, capture motion information by encoding where and when motion occurred in the images. Inspired by codebook approaches for object and scene categorization, we first construct a codebook of temporal motion templates by clustering all the MHIs of each particular action. These MHIs capture different actors, speeds and a wide range of camera viewpoints. We use a Kohonen Self-Organizing Map (SOM) to simultaneously cluster the MHI templates and represent them in lower-dimensional subspaces. To handle temporal segmentation and concurrently carry out action recognition, a new architecture is proposed in which the observed MHIs are projected onto all these action-specific manifolds, and the Euclidean distance between each MHI and the nearest cluster within each action manifold constitutes the observation vector of a Markov model. To estimate the state/action at each time step, we introduce a new method based on Observable Markov Models (OMMs) in which the Markov model is augmented with a neutral state. The combination of our action-specific manifolds with the augmented OMM makes it possible to automatically segment and recognize long sequences of consecutive actions, without any prior knowledge of the initial and final frames of each action. Importantly, our method can interpolate between training viewpoints and recognize actions independently of the camera viewpoint, even from unseen viewpoints.
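To make the pipeline concrete, the following is a minimal sketch of the three stages summarized above: clustering the flattened MHIs of one action with a small SOM, building the per-frame observation vector of distances to the nearest cluster of each action-specific codebook, and decoding per-frame states with an Observable Markov Model augmented with a neutral state. All function names, the grid size, the learning schedule, the softmax observation model and the transition structure are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of the pipeline described in the abstract (not the authors' code).
# Grid size, learning schedule, observation model and transition structure are assumptions.
import numpy as np


def train_som(mhis, grid=(8, 8), iters=2000, lr0=0.5, sigma0=2.0, seed=0):
    """Cluster flattened MHIs of one action with a small Kohonen SOM.

    mhis: (n_templates, n_pixels) array of flattened Motion History Images.
    Returns the codebook weights, shape (grid[0] * grid[1], n_pixels).
    """
    rng = np.random.default_rng(seed)
    gx, gy = grid
    coords = np.array([(i, j) for i in range(gx) for j in range(gy)], dtype=float)
    w = mhis[rng.integers(len(mhis), size=gx * gy)].astype(float)  # init from data
    for t in range(iters):
        x = mhis[rng.integers(len(mhis))]
        bmu = np.argmin(((w - x) ** 2).sum(axis=1))      # best-matching unit
        lr = lr0 * np.exp(-t / iters)                    # decaying learning rate
        sigma = sigma0 * np.exp(-t / iters)              # shrinking neighbourhood
        d2 = ((coords - coords[bmu]) ** 2).sum(axis=1)   # grid distance to the BMU
        h = np.exp(-d2 / (2.0 * sigma ** 2))             # neighbourhood function
        w += lr * h[:, None] * (x - w)
    return w


def observation_vector(mhi, codebooks):
    """Euclidean distance from one observed MHI to the nearest cluster of each
    action-specific codebook; this vector feeds the Markov model."""
    x = mhi.ravel()
    return np.array([np.min(np.linalg.norm(cb - x, axis=1)) for cb in codebooks])


def decode_actions(dist_seq, stay=0.95):
    """Viterbi decoding over an Observable Markov Model augmented with a neutral
    state: states 0..K-1 are the actions, state K is the neutral state.

    dist_seq: (T, K) stacked observation vectors for a test sequence.
    Per-frame likelihoods are a softmax of negative distances (an assumption).
    """
    T, K = dist_seq.shape
    S = K + 1
    e = np.exp(-dist_seq)
    e /= e.sum(axis=1, keepdims=True)
    emis = np.hstack([e, np.full((T, 1), 1.0 / K)])      # flat likelihood for "neutral"
    # Strong self-loops; switches between actions are routed through the neutral state.
    A = np.full((S, S), 1e-6)
    np.fill_diagonal(A, stay)
    A[:K, K] = 1.0 - stay
    A[K, :K] = (1.0 - stay) / K
    A /= A.sum(axis=1, keepdims=True)
    logA, logE = np.log(A), np.log(emis + 1e-12)
    delta = -np.log(S) + logE[0]                         # uniform initial distribution
    psi = np.zeros((T, S), dtype=int)
    for t in range(1, T):
        scores = delta[:, None] + logA                   # scores[i, j]: prev state i -> current j
        psi[t] = scores.argmax(axis=0)
        delta = scores.max(axis=0) + logE[t]
    path = np.empty(T, dtype=int)
    path[-1] = delta.argmax()
    for t in range(T - 1, 0, -1):
        path[t - 1] = psi[t, path[t]]
    return path                                          # per-frame action index, or K = neutral
```

In practice one would train one codebook per action on that action's MHIs, stack the observation_vector outputs of a test sequence into dist_seq, and read the decoded path as the per-frame action labels, with K marking frames assigned to the neutral state.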
