Abstract

Human action recognition is a challenging task due to the articulated and complex nature of human actions. Recently developed commodity depth sensors, coupled with skeleton estimation algorithms, have generated renewed interest in skeletal action recognition. In this paper, we characterize human actions with a novel graph-based model that preserves the complex spatial structure among skeletal joints, according to their activity levels, as well as spatio-temporal joint features. In particular, the proposed top-K Relative Variance of Joint Relative Distance (RVJRD) determines which joint pairs are selected as edges in the resulting graph according to their normalized activity levels. In addition, temporal pyramid covariance descriptors are adopted to represent joint locations. A graph kernel then measures the similarity between two action graphs by matching the walks of the two graphs. We evaluate the proposed approach on three challenging action recognition datasets captured by depth sensors, and the experimental results show that it outperforms several state-of-the-art skeletal action recognition approaches.
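The top-K RVJRD selection described above can be sketched in a few lines. This is a hedged illustration, not the paper's implementation: here RVJRD is assumed to be the variance of a joint pair's per-frame Euclidean distance normalized by its mean distance (a coefficient-of-variation-style activity level); the paper's exact normalization and the function name `top_k_rvjrd` are assumptions for illustration.

```python
import numpy as np
from itertools import combinations

def top_k_rvjrd(joints, k):
    """Select the k joint pairs with the highest assumed RVJRD score
    over a skeleton sequence.

    joints: array of shape (T, J, 3) -- T frames, J joints, 3-D positions.
    Returns a list of k (i, j) joint-index pairs, the candidate graph edges.
    """
    T, J, _ = joints.shape
    scores = {}
    for i, j in combinations(range(J), 2):
        # Per-frame Euclidean distance between joints i and j, shape (T,).
        d = np.linalg.norm(joints[:, i] - joints[:, j], axis=1)
        # Assumed normalization: variance of the distance over time,
        # divided by its mean (epsilon guards against degenerate pairs).
        scores[(i, j)] = d.var() / (d.mean() + 1e-8)
    # Keep the k most "active" pairs as edges of the action graph.
    return sorted(scores, key=scores.get, reverse=True)[:k]

# Toy example: 2 frames, 3 joints; only joint 2 moves between frames.
seq = np.array([[[0, 0, 0], [1, 0, 0], [0, 2, 0]],
                [[0, 0, 0], [1, 0, 0], [0, 1, 0]]], dtype=float)
edges = top_k_rvjrd(seq, k=1)
```

In this toy sequence the static pair (0, 1) scores zero, so the single selected edge connects a pair involving the moving joint, matching the intuition that edges should link the most active joint pairs.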
