Abstract

Human action recognition is an important yet challenging task. In this paper, a simple and efficient method based on random forests is proposed for human action recognition. First, we extract the 3D skeletal joint locations from depth images. APJ3D features are then computed from the action depth image sequences by combining 3D joint position features with 3D joint angle features, and clustered with the K-means algorithm to obtain representative key postures of the actions. By employing the improved Fourier Temporal Pyramid, we recognize actions using random forests. The proposed method is evaluated on a public video dataset, namely the UTKinect-Action dataset, which consists of 200 3D sequences of 10 activities performed by 10 individuals from varied views. Experimental results show that the 3D skeletal joint location estimation is robust and that the proposed method performs well on this dataset. In addition, owing to the design of our method and the robust 3D skeletal joint location estimation from the RGB-D sensor, our method demonstrates significant reliability against noise on the 3D action dataset.
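The pipeline described above (per-frame skeletal features, K-means key postures, random forest classification) can be sketched as follows. This is a minimal illustration, not the authors' implementation: the feature dimensions, cluster count, and synthetic data are assumptions, and the sequence descriptor uses a simple posture histogram in place of the improved Fourier Temporal Pyramid.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Assumption: each frame yields a 60-dim vector (e.g. 20 joints x 3D positions);
# the paper's APJ3D features also include relative 3D joint angles.
n_frames, feat_dim, n_clusters = 500, 60, 8
frame_features = rng.normal(size=(n_frames, feat_dim))

# Cluster per-frame features into K representative "key postures".
kmeans = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit(frame_features)

def sequence_descriptor(seq):
    """Histogram of key-posture assignments over a sequence's frames
    (a simplification; the paper uses an improved Fourier Temporal Pyramid)."""
    labels = kmeans.predict(seq)
    hist = np.bincount(labels, minlength=n_clusters).astype(float)
    return hist / hist.sum()

# Toy dataset: 40 synthetic sequences of 30 frames, 2 action classes.
X = np.stack([sequence_descriptor(rng.normal(size=(30, feat_dim)))
              for _ in range(40)])
y = rng.integers(0, 2, size=40)

# Final stage: a random forest classifier over the sequence descriptors.
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
predictions = clf.predict(X[:3])
```

In a real setting, `frame_features` would come from skeletal joints estimated from depth images, and the descriptor would encode temporal structure rather than discarding frame order.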
