Abstract

Automatic human action recognition is an interesting and challenging problem in computer vision, with a wide range of real-world applications such as human-machine interaction, surveillance, data-driven automation, smart homes, and robotics. In recent years, the availability of 3D sensors has made it possible to capture depth maps in real time, which simplifies a variety of visual recognition tasks, including action classification and 3D reconstruction. We address the problem of human action recognition in depth sequences. First, we present a novel approach for human action recognition based on two-dimensional principal component analysis (2DPCA) applied to depth motion maps (DMMs). Then, we project the feature matrices into difference spaces to create a robust action representation. Finally, we use a genetic algorithm (GA) to generate the coefficients for multiple SVM classifiers. Our approach is systematically evaluated on the benchmark MSR-Action3D and MSR-Gesture3D datasets, achieving overall accuracies of 91.32% on MSR-Action3D and 94.89% on MSR-Gesture3D. The experimental results indicate that the proposed feature extraction and representation are effective.
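
The abstract outlines a pipeline of DMM computation, 2DPCA-based feature extraction, and GA-weighted SVM classification. The following minimal sketch illustrates only the first two stages under stated assumptions: the DMM accumulation rule (sum of absolute frame differences), the array shapes, and the `n_components` parameter are illustrative choices, not the authors' implementation.

```python
# Minimal sketch (not the authors' code): computing a depth motion map (DMM)
# from a depth sequence and reducing it with 2DPCA. Shapes and the number of
# retained components are illustrative assumptions.
import numpy as np

def depth_motion_map(depth_frames):
    """Accumulate absolute frame-to-frame differences of a depth sequence (T, H, W)."""
    frames = np.asarray(depth_frames, dtype=np.float64)
    return np.sum(np.abs(np.diff(frames, axis=0)), axis=0)   # (H, W)

def fit_2dpca(dmm_matrices, n_components=10):
    """Learn a 2DPCA projection from a set of H x W DMM matrices."""
    mean = np.mean(dmm_matrices, axis=0)
    # Image covariance matrix: average of (A - mean)^T (A - mean)
    cov = np.zeros((mean.shape[1], mean.shape[1]))
    for a in dmm_matrices:
        d = a - mean
        cov += d.T @ d
    cov /= len(dmm_matrices)
    # Keep the eigenvectors with the largest eigenvalues as projection axes
    eigvals, eigvecs = np.linalg.eigh(cov)
    order = np.argsort(eigvals)[::-1][:n_components]
    return eigvecs[:, order]                                  # (W, n_components)

def project(dmm, projection):
    """Project a DMM onto the learned 2DPCA axes to get a compact feature matrix."""
    return dmm @ projection                                   # (H, n_components)
```

The resulting feature matrices would then, per the abstract, be projected into difference spaces and fed to an ensemble of SVM classifiers whose combination coefficients are searched by a GA; that stage is not sketched here.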
