Abstract

Feature selection is crucial in Kinect-based pattern recognition, including human gesture recognition. In Kinect-based gesture recognition, the feature conventionally consists of the (x, y, z) coordinates of the primary joints of the human body. However, joint positions alone are insufficient to clearly describe the characteristics of human activity patterns. This paper proposes a feature design scheme that hybridizes joint positions and joint angles for human gesture recognition with the Kinect camera. The method combines the 20 main joint positions captured by the Kinect camera with the angle information of 12 critical joints, i.e., those exhibiting significant angle variation when a gesture is performed, and is applied to dynamic time warping (DTW) gesture recognition. When used for Kinect-based DTW gesture recognition, the proposed method derives an appropriately sized feature vector for each gesture category in the DTW reference template database according to the activity characteristics of that category. Experiments on Kinect-based DTW recognition of 14 common categories of human gestures show that the feature obtained with the proposed approach is superior to the conventional feature that uses only joint position information.
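
To make the hybrid feature idea concrete, the following is a minimal sketch, not the paper's actual implementation: it computes a joint angle from three joint positions, concatenates the 20 flattened (x, y, z) positions with angles at a set of joint triplets, and scores a gesture sequence against a template with classic DTW. The function names, the triplet indices, and the feature-vector layout are illustrative assumptions.

import numpy as np

def joint_angle(a, b, c):
    # Angle (radians) at joint b, formed by segments b->a and b->c.
    v1, v2 = a - b, c - b
    cos = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2) + 1e-8)
    return np.arccos(np.clip(cos, -1.0, 1.0))

def hybrid_feature(joints, angle_triplets):
    # joints: (20, 3) array of (x, y, z) positions for one frame.
    # angle_triplets: (parent, joint, child) index triplets for the joints
    # whose angles are appended to the position coordinates.
    positions = joints.reshape(-1)  # 60 position values
    angles = np.array([joint_angle(joints[p], joints[j], joints[c])
                       for p, j, c in angle_triplets])
    return np.concatenate([positions, angles])

def dtw_distance(seq_a, seq_b):
    # Classic DTW between two sequences of per-frame feature vectors.
    n, m = len(seq_a), len(seq_b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = np.linalg.norm(seq_a[i - 1] - seq_b[j - 1])
            cost[i, j] = d + min(cost[i - 1, j], cost[i, j - 1], cost[i - 1, j - 1])
    return cost[n, m]

In use, each gesture category would keep one or more reference templates of such per-frame hybrid features, and a test sequence would be assigned to the category whose template yields the smallest DTW distance; which angles enter the feature vector can then differ per category, as the abstract describes.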
