Abstract

Human action analysis based on 3D imaging is an emerging topic. This paper presents an approach to action recognition based on a set of action descriptors computed from a skeleton fitted to the body of a tracked subject. The approach employs a novel technique that automatically determines discriminative sequences of relative joint positions for each action class. In addition, we use an extended formulation of the longest common subsequence algorithm as a similarity function, which allows the classifier to reliably find the best match for features extracted from noisy skeletal data. The proposed approach is evaluated on two existing datasets from the literature, one captured with a Microsoft Kinect camera and the other with a motion capture system. The experimental results show that the approach outperforms existing skeleton-based algorithms in classification accuracy and is more robust to noise than the dynamic time warping algorithm for human action recognition.
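The abstract describes using a longest-common-subsequence style similarity between sequences of relative joint positions. Below is a minimal illustrative sketch of that idea, assuming frames match when the Euclidean distance between their joint-position feature vectors falls below a tolerance `eps`; the function names, feature dimensions, and the thresholded matching rule are assumptions for illustration and do not reproduce the paper's exact extended formulation.

```python
import numpy as np

def lcss_similarity(seq_a, seq_b, eps=0.1):
    """LCS-style similarity between two sequences of relative
    joint-position feature vectors (one vector per frame).

    Two frames count as matching when the Euclidean distance between
    their feature vectors is below `eps` (an assumed tolerance,
    standing in for the paper's extended LCS formulation).
    """
    n, m = len(seq_a), len(seq_b)
    # dp[i, j] = length of the longest matching subsequence of
    # seq_a[:i] and seq_b[:j]
    dp = np.zeros((n + 1, m + 1), dtype=int)
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            if np.linalg.norm(seq_a[i - 1] - seq_b[j - 1]) < eps:
                dp[i, j] = dp[i - 1, j - 1] + 1
            else:
                dp[i, j] = max(dp[i - 1, j], dp[i, j - 1])
    # Normalise by the shorter sequence so the score lies in [0, 1].
    return dp[n, m] / min(n, m)

# Usage sketch: compare a query sequence against per-class templates and
# pick the class with the highest similarity (nearest-template classifier).
rng = np.random.default_rng(0)
query = rng.normal(size=(40, 9))          # 40 frames, 9-D joint features
templates = {"wave": rng.normal(size=(35, 9)),
             "kick": rng.normal(size=(45, 9))}
scores = {label: lcss_similarity(query, t, eps=1.5)
          for label, t in templates.items()}
print(max(scores, key=scores.get), scores)
```

Unlike dynamic time warping, which must align every frame and therefore accumulates cost from noisy or spurious frames, an LCS-style score can simply skip frames that do not match, which is consistent with the robustness claim in the abstract.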
