Abstract

This paper proposes a framework for recognizing human actions from depth video sequences using a novel feature descriptor based on Depth Motion Maps (DMMs), the Contourlet Transform (CT), and Histograms of Oriented Gradients (HOGs). First, the CT is applied to the DMMs generated from a depth video sequence, and HOGs are then computed for each contourlet sub-band. Finally, the concatenation of these HOG features serves as the feature descriptor for the depth video sequence. With this new descriptor, an l2-regularized collaborative representation classifier is used to recognize human actions. Experimental results on the Microsoft Research Action3D dataset demonstrate that the proposed method achieves state-of-the-art performance for human action recognition, owing to the precise feature extraction that the contourlet transform provides on the DMMs.
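The l2-regularized collaborative representation classifier mentioned in the abstract admits a well-known closed-form coding step: the query feature vector is coded over the whole training dictionary as x = (AᵀA + λI)⁻¹Aᵀy, and the class with the smallest class-specific reconstruction residual is selected. The sketch below illustrates this general technique in NumPy; the function name, the λ value, and the residual normalization are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def crc_classify(A, labels, y, lam=1e-3):
    """Classify query vector y with an l2-regularized collaborative
    representation over the training dictionary A (features as columns).

    A      : (d, n) matrix whose columns are training feature vectors
    labels : (n,) array of class labels, one per column of A
    y      : (d,) query feature vector
    lam    : l2 regularization weight (illustrative default)
    """
    n = A.shape[1]
    # Closed-form collaborative coding: x = (A^T A + lam*I)^{-1} A^T y
    x = np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ y)

    classes = np.unique(labels)
    residuals = []
    for c in classes:
        mask = labels == c
        # Reconstruction residual using only this class's atoms,
        # normalized by the coefficient norm (a common CRC variant)
        r = np.linalg.norm(y - A[:, mask] @ x[mask])
        r /= np.linalg.norm(x[mask]) + 1e-12
        residuals.append(r)
    # Predict the class with the smallest residual
    return classes[int(np.argmin(residuals))]
```

In the proposed pipeline, y would be the concatenated HOG descriptor of the query depth sequence and the columns of A the descriptors of the training sequences.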
