Abstract

Recognizing human actions from video sequences is an active research area in computer vision. This paper describes an effective approach to generating compact and informative representations for action recognition. We design a new action feature descriptor inspired by the Laban Movement Analysis (LMA) method, together with an efficient preprocessing step that produces a view-invariant representation of human motion. The descriptor is evaluated with four well-known machine learning methods: Random Decision Forest, Multi-Layer Perceptron, and multi-class Support Vector Machines in both One-Against-One and One-Against-All configurations. The proposed approach is assessed on two challenging action recognition benchmarks, Microsoft Research Cambridge-12 (MSRC-12) and MSR-Action3D. We use the same experimental settings for all four classifiers to allow a direct comparison and to demonstrate the robustness of our descriptor vector. Experimental results show that our approach outperforms state-of-the-art methods.
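
To illustrate the evaluation protocol described above, the following minimal sketch feeds a precomputed descriptor matrix to the four classifier families under a shared cross-validation setting. It is not the paper's implementation: it uses scikit-learn, and the arrays X and y are hypothetical placeholders standing in for the LMA-inspired descriptor vectors and their action labels; all hyperparameters are illustrative.

    # Minimal sketch (not the authors' code): comparing the four classifiers
    # on a placeholder descriptor matrix under identical experimental settings.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.neural_network import MLPClassifier
    from sklearn.svm import SVC
    from sklearn.multiclass import OneVsOneClassifier, OneVsRestClassifier
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 48))   # placeholder LMA-inspired descriptor vectors
    y = np.arange(200) % 12          # placeholder action labels (12 balanced classes)

    classifiers = {
        "Random Decision Forest": RandomForestClassifier(n_estimators=100, random_state=0),
        "Multi-Layer Perceptron": MLPClassifier(hidden_layer_sizes=(64,), max_iter=1000, random_state=0),
        "SVM One-Against-One":    OneVsOneClassifier(SVC(kernel="rbf")),
        "SVM One-Against-All":    OneVsRestClassifier(SVC(kernel="rbf")),
    }

    # Same 5-fold cross-validation protocol for every classifier,
    # so the comparison between them is direct.
    for name, clf in classifiers.items():
        scores = cross_val_score(clf, X, y, cv=5)
        print(f"{name}: {scores.mean():.3f} +/- {scores.std():.3f}")

In practice, X would be replaced by the descriptor vectors extracted after the view-invariant preprocessing step, and the cross-validation scheme by the benchmark's standard split.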
