Abstract

Human gesture recognition is important for smooth and efficient human-robot interaction. One of the difficulties in gesture recognition is that different actors perform even the same gestures in different styles. To move towards more realistic scenarios, a robot must handle not only different users, but also different viewpoints and the noisy, incomplete data produced by its onboard sensors. Facing these challenges, we propose a new invariant representation of rigid body motions that is invariant to translation, rotation, and scaling. For classification, a Hidden Markov Model-based approach and a Dynamic Time Warping-based approach are modified by weighting the importance of individual body parts. The proposed method is tested on two Kinect datasets and compared with another invariant representation and a typical non-invariant representation. The experimental results show the good recognition performance of the proposed approach.
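
To make the classification idea concrete, below is a minimal sketch, not the authors' implementation, of Dynamic Time Warping in which the per-frame distance is a weighted sum over body parts. The function names, array shapes, and weight values are illustrative assumptions only.

```python
# Sketch: DTW over skeleton sequences with per-body-part weights.
# This is an illustrative assumption of the weighting idea, not the paper's code.
import numpy as np

def weighted_frame_distance(frame_a, frame_b, part_weights):
    """Weighted sum of per-part Euclidean distances between two frames.

    frame_a, frame_b: arrays of shape (num_parts, feature_dim)
    part_weights:     array of shape (num_parts,)
    """
    per_part = np.linalg.norm(frame_a - frame_b, axis=1)
    return float(np.dot(part_weights, per_part))

def weighted_dtw(seq_a, seq_b, part_weights):
    """DTW alignment cost between two gesture sequences.

    seq_a: (T_a, num_parts, feature_dim)
    seq_b: (T_b, num_parts, feature_dim)
    """
    T_a, T_b = len(seq_a), len(seq_b)
    cost = np.full((T_a + 1, T_b + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, T_a + 1):
        for j in range(1, T_b + 1):
            d = weighted_frame_distance(seq_a[i - 1], seq_b[j - 1], part_weights)
            # Standard DTW recursion: match, insertion, or deletion.
            cost[i, j] = d + min(cost[i - 1, j], cost[i, j - 1], cost[i - 1, j - 1])
    return cost[T_a, T_b]

if __name__ == "__main__":
    # Toy example: 3 body parts with 3-D features each; the second part
    # (e.g. a dominant arm) is weighted more heavily (hypothetical weights).
    rng = np.random.default_rng(0)
    query = rng.normal(size=(20, 3, 3))
    template = rng.normal(size=(25, 3, 3))
    weights = np.array([0.2, 0.6, 0.2])
    print(weighted_dtw(query, template, weights))
```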
