Abstract

Human action recognition is a challenging task in machine learning and pattern recognition. This paper presents an action recognition framework based on depth sequences. An effective feature descriptor, the depth motion maps pyramid (DMMP), inspired by depth motion maps (DMMs), is developed. First, a series of DMMs at different temporal scales is constructed to effectively capture the spatial–temporal motion patterns of human actions; these DMMs are then fused to obtain the final descriptor, the DMM pyramid. Second, we propose a discriminative collaborative representation classifier (DCRC), in which an extra constraint on the collaborative coefficients is imposed to provide prior knowledge for the representation coefficients. DCRC is then applied to encode the obtained features and recognize human actions. The proposed framework is evaluated on the MSR Action3D dataset, the MSR hand gesture dataset, UTD-MHAD, and the MSR Daily Activity3D dataset. The experimental results indicate the effectiveness of the proposed method for human action recognition.
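
As a rough illustration of the two steps summarized above, the following Python sketch builds a multi-scale DMM descriptor and classifies it with a plain collaborative representation classifier. The projection scheme, scale set, and function names are assumptions for illustration only; the paper's DCRC imposes an additional constraint on the collaborative coefficients that is not reproduced here.

```python
# Illustrative sketch only: a depth sequence is assumed to be a (T, H, W)
# NumPy array. The absolute-difference DMM form, the max-projection onto the
# side/top views, and the plain ridge-regularized CRC are common formulations,
# not the paper's exact DCRC.
import numpy as np

def depth_motion_maps(depth_seq):
    """Accumulate absolute frame-to-frame motion on the front, side, and top
    projections of a depth sequence (one common DMM formulation)."""
    front = depth_seq                    # (T, H, W) raw depth values
    side = depth_seq.max(axis=2)         # projection along width  -> (T, H)
    top = depth_seq.max(axis=1)          # projection along height -> (T, W)
    return [np.abs(np.diff(p, axis=0)).sum(axis=0) for p in (front, side, top)]

def dmm_pyramid(depth_seq, scales=(1, 2, 4)):
    """Build DMMs over several temporal scales (sub-sampled sequences) and
    concatenate the normalized maps into a single pyramid descriptor."""
    feats = []
    for s in scales:
        for dmm in depth_motion_maps(depth_seq[::s]):
            feats.append(dmm.ravel() / (np.linalg.norm(dmm) + 1e-8))
    return np.concatenate(feats)

def crc_classify(x, D, labels, lam=0.01):
    """Plain collaborative representation classification: solve the ridge
    problem min ||x - D a||^2 + lam ||a||^2, then assign the class whose
    training columns give the smallest reconstruction residual."""
    A = D.T @ D + lam * np.eye(D.shape[1])
    a = np.linalg.solve(A, D.T @ x)      # collaborative coefficients
    classes = np.unique(labels)
    residuals = [np.linalg.norm(x - D[:, labels == c] @ a[labels == c])
                 for c in classes]
    return classes[int(np.argmin(residuals))]
```

In this sketch, `D` holds the pyramid descriptors of the training samples as columns and `labels` their class labels; a test descriptor `x = dmm_pyramid(test_seq)` would be classified as `crc_classify(x, D, labels)`.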
