Abstract

This paper presents a simple and computationally efficient framework for human action recognition based on modeling the motion of human body parts. Intuitively, a collective understanding of human part movements can lead to a better understanding and representation of any human action. We propose a generative representation of the motion of human body parts to learn and classify human actions. The proposed representation combines the advantages of both local and global representations, encoding the relevant motion information while remaining robust to local appearance changes. Our work is motivated by the pictorial structures model and the framework of sparse representations for recognition. Human part movements are represented efficiently through quantization in the polar space, and the discriminative information within each action is encoded via sparse representation to perform classification. The proposed method is evaluated on both the KTH and the UCF action datasets, and the results are compared against other state-of-the-art methods.
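As a rough illustration of the polar-space quantization step described above, the sketch below bins the 2D displacement vectors of a tracked body part into a joint magnitude–angle histogram. The bin counts, magnitude cap, and function name are illustrative assumptions, not the paper's actual parameters.

```python
import numpy as np

def polar_histogram(displacements, n_angle_bins=8, n_mag_bins=4, max_mag=10.0):
    """Quantize 2D part displacement vectors into a polar-space histogram.

    `displacements` is an (N, 2) array of (dx, dy) motion vectors for one
    body part over a clip. Bin counts and max_mag are illustrative choices.
    """
    dx, dy = displacements[:, 0], displacements[:, 1]
    mag = np.hypot(dx, dy)
    ang = np.mod(np.arctan2(dy, dx), 2 * np.pi)  # angle in [0, 2*pi)
    # Map each vector to an (angle bin, magnitude bin) pair, clipping the
    # top edge so values exactly at the boundary stay in the last bin.
    a_idx = np.minimum((ang / (2 * np.pi) * n_angle_bins).astype(int),
                       n_angle_bins - 1)
    m_idx = np.minimum((mag / max_mag * n_mag_bins).astype(int),
                       n_mag_bins - 1)
    hist = np.zeros((n_mag_bins, n_angle_bins))
    np.add.at(hist, (m_idx, a_idx), 1)           # accumulate bin counts
    return hist.ravel() / max(len(displacements), 1)  # normalized descriptor
```

One such descriptor per body part, concatenated across parts, would then serve as the input to a sparse-representation classifier.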
