Abstract
Human action recognition is a challenging task in machine learning and pattern recognition. This paper presents an action recognition framework based on depth sequences. An effective feature descriptor, the depth motion maps pyramid (DMMP), inspired by depth motion maps (DMMs), is developed. First, a series of DMMs at different temporal scales is constructed to capture the spatial–temporal motion patterns of human actions; these DMMs are then fused to obtain the final DMMP descriptor. Second, we propose a discriminative collaborative representation classifier (DCRC), in which an extra constraint on the collaborative coefficient provides prior knowledge for the representation. The DCRC is applied to the obtained features to recognize human actions. The proposed framework is evaluated on the MSR Action3D dataset, the MSR hand gesture dataset, UTD-MHAD, and the MSR Daily Activity3D dataset. The experimental results indicate the effectiveness of the proposed method for human action recognition.
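As a rough illustration of the pipeline summarized above, the sketch below builds a multi-temporal-scale DMM feature from a depth sequence and classifies it with a plain collaborative representation classifier (CRC-RLS). The choice of temporal scales, the single-view projection, and the crc_classify helper are illustrative assumptions; the paper's DMMP fuses DMMs from multiple views, and its DCRC imposes an additional discriminative constraint on the coefficients that is not reproduced here.

```python
import numpy as np

def depth_motion_map(frames):
    """Standard DMM: accumulate absolute frame-to-frame differences
    of a depth sequence, DMM = sum_i |frame_{i+1} - frame_i|."""
    frames = np.asarray(frames, dtype=np.float32)
    return np.abs(np.diff(frames, axis=0)).sum(axis=0)

def dmm_pyramid(frames, scales=(1, 2, 4)):
    """Hypothetical multi-temporal-scale descriptor: compute a DMM from
    every s-th frame for each scale s, then fuse (concatenate) the
    flattened maps into one feature vector."""
    frames = np.asarray(frames, dtype=np.float32)
    feats = [depth_motion_map(frames[::s]).ravel() for s in scales]
    return np.concatenate(feats)

def crc_classify(train_feats, train_labels, query, lam=0.01):
    """Plain CRC-RLS baseline: solve
    alpha = argmin ||y - X alpha||^2 + lam ||alpha||^2 in closed form,
    then assign the class with the smallest regularized class-wise
    reconstruction residual. (The paper's DCRC adds an extra prior on
    alpha, omitted in this sketch.)"""
    X = np.asarray(train_feats, dtype=np.float64).T        # d x n dictionary
    y = np.asarray(query, dtype=np.float64)
    P = np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T)
    alpha = P @ y
    labels = np.asarray(train_labels)
    residuals = {
        c: np.linalg.norm(y - X[:, labels == c] @ alpha[labels == c])
           / (np.linalg.norm(alpha[labels == c]) + 1e-12)
        for c in np.unique(labels)
    }
    return min(residuals, key=residuals.get)
```

With this layout, each training depth sequence is reduced to one pyramid feature vector, and a query action is assigned to whichever class reconstructs its feature with the smallest regularized residual, which is the intuition the abstract's DCRC builds on.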