Abstract

Human activity, which usually consists of several actions (sub-activities), generally covers interactions among persons and/or objects. In particular, human actions involve certain spatial and temporal relationships, are the components of more complicated activities, and evolve dynamically over time. Therefore, the description of a single human action and the modeling of the evolution of successive human actions are two major issues in human activity recognition. In this paper, we develop a method for human activity recognition that tackles these two issues. In the proposed method, an activity is divided into several successive actions represented by spatio-temporal patterns, and the evolution of these actions is captured by a sequential model. A refined comprehensive spatio-temporal graph is utilized to represent a single action; it is a qualitative representation of a human action incorporating both the spatial and temporal relations of the participating objects. Next, a discrete hidden Markov model is applied to model the evolution of action sequences. Moreover, a fully automatic partition method is proposed to divide a long-term human activity video into several human actions based on variational objects and qualitative spatial relations. Finally, a hierarchical decomposition of the human body is introduced to obtain a discriminative representation for a single action. Experimental results on the Cornell Activity Dataset demonstrate the efficiency and effectiveness of the proposed approach, enabling long human activity videos to be recognized more accurately.
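The abstract describes classifying an activity by modeling its sequence of actions with discrete hidden Markov models. The sketch below is a minimal, generic illustration of that recognition scheme (one HMM per activity class, label chosen by highest likelihood); the parameter names and the assumption that symbols come from vector-quantized action descriptors are ours, not the authors' code.

```python
# Minimal sketch: score a discrete symbol sequence under per-class HMMs and pick
# the best-scoring class. Parameters (pi, A, B) are standard HMM notation.
import numpy as np

def forward_log_likelihood(obs, pi, A, B):
    """log P(obs | HMM) via the scaled forward algorithm.

    obs : sequence of discrete symbol indices (one per detected action)
    pi  : (N,) initial state probabilities
    A   : (N, N) state transition matrix
    B   : (N, M) emission matrix over M codebook symbols
    """
    alpha = pi * B[:, obs[0]]
    log_lik = np.log(alpha.sum())
    alpha /= alpha.sum()                      # rescale to avoid numerical underflow
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]
        log_lik += np.log(alpha.sum())
        alpha /= alpha.sum()
    return log_lik

def classify(obs, models):
    """models: dict mapping activity label -> (pi, A, B); return the best label."""
    return max(models, key=lambda label: forward_log_likelihood(obs, *models[label]))
```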

Highlights

  • The automated recognition of human behavior in a video has attracted much interest in the computer-vision domain [1]–[5] because of its wide range of applications, including video surveillance [6], health care and social assistance [7], human-computer interaction, entertainment [8], and so on.

  • The main contribution of this paper includes the following three aspects: 1) We develop an efficient method for human activity recognition, in which a long-term human activity video is divided into several successive human actions represented as spatio-temporal patterns, and the evolution of these human actions is modeled by Hidden Markov Models (HMMs).

  • Many common human actions are shared by most human activities, whereas significant and distinguishing human actions are fewer in number; this imbalance leads to inferior vector quantization results when we apply K-means to cluster all the data directly (see the sketch after this list).
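The highlight above refers to the vector quantization step in which real-valued action descriptors are mapped to the discrete symbols consumed by the HMMs. The following is a minimal K-means codebook sketch, not the authors' exact procedure: the descriptor dimensionality, codebook size, and placeholder data are illustrative assumptions, and the imbalance issue the highlight raises would still have to be addressed (for example by reweighting or per-class sampling, which is our suggestion rather than the paper's remedy).

```python
# Illustrative vector quantization: build a K-means codebook over action descriptors
# and replace each descriptor with the index of its nearest codebook centre.
import numpy as np
from sklearn.cluster import KMeans

def build_codebook(descriptors, n_words=64, seed=0):
    """Cluster training action descriptors into a codebook of n_words visual words."""
    return KMeans(n_clusters=n_words, n_init=10, random_state=seed).fit(descriptors)

def quantize(codebook, descriptors):
    """Map each descriptor to its nearest codebook centre (a discrete symbol)."""
    return codebook.predict(descriptors)

# Synthetic placeholder data: 500 training actions described by 128-d vectors.
rng = np.random.default_rng(0)
codebook = build_codebook(rng.normal(size=(500, 128)))
symbols = quantize(codebook, rng.normal(size=(7, 128)))   # one symbol per action in a video
```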


Summary

INTRODUCTION

The automated recognition of human behavior in a video has attracted much interest in the computer-vision domain [1]–[5] because of its wide range of applications, including video surveillance [6], health care and social assistance [7], human-computer interaction, entertainment [8], and so on. We represent a long-term human activity by dynamic qualitative spatio-temporal graphs constructed over short time periods, and a bag-of-words approach is applied to quantize these representations into discrete symbols. CAD-120 is a publicly available benchmark dataset that includes video sequences of interactions among objects and humans performing daily real-world activities. These long human activity videos (high-level) can be considered as concatenations of actions (low-level) and are ideally compatible with the proposed methodology. The main contribution of this paper includes the following three aspects: 1) We develop an efficient method for human activity recognition, in which a long-term human activity (high-level) video is divided into several successive human actions (low-level) represented as spatio-temporal patterns, and the evolution of these human actions is modeled by HMMs. 2) We improve the qualitative spatio-temporal graph presented in [15] with direction relations to represent human actions.
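To make the notion of a qualitative spatio-temporal graph with direction relations concrete, the sketch below shows one way the edges of such a graph could be labeled from tracked object positions in a single frame. The relation vocabulary (touching/near/far/distant, left/right/above/below) and the pixel thresholds are assumptions for illustration; the paper's refined graph defines its own spatial, temporal, and direction relations.

```python
# Hedged illustration: qualitative distance and direction relations between every
# pair of tracked entities in one frame, which could serve as labeled graph edges.
import math

NEAR, FAR = 80.0, 200.0   # pixel thresholds, purely illustrative

def distance_relation(p, q):
    d = math.dist(p, q)
    return "touching" if d < 10 else "near" if d < NEAR else "far" if d < FAR else "distant"

def direction_relation(p, q):
    """Coarse direction of object q relative to object p (image coordinates)."""
    dx, dy = q[0] - p[0], q[1] - p[1]
    horiz = "right" if dx > 0 else "left"
    vert = "below" if dy > 0 else "above"
    return horiz if abs(dx) >= abs(dy) else vert

def frame_relations(centroids):
    """Qualitative edges between every pair of tracked entities in one frame."""
    names = list(centroids)
    return {(a, b): (distance_relation(centroids[a], centroids[b]),
                     direction_relation(centroids[a], centroids[b]))
            for i, a in enumerate(names) for b in names[i + 1:]}

print(frame_relations({"hand": (120, 200), "cup": (150, 190), "table": (400, 420)}))
```

Chaining these per-frame (or per-interval) relation sets over time, and noting when a relation changes, is what gives the representation its temporal, qualitative character.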

RELATED WORK
HUMAN ACTION REPRESENTATION
DISCRETE HMM
VECTOR QUANTIZATION
ACTIVITY RECOGNITION WITH DISCRETE HMMs
EXPERIMENTAL STUDIES
Findings
CONCLUSION
