Abstract

Traditional studies in vision-based hand gesture recognition remain rooted in view-dependent representations, forcing users to stay fronto-parallel to the camera. To solve this problem, view-invariant gesture recognition aims to make the recognition result independent of viewpoint changes. However, in current work this view-invariance is achieved at the price of conflating gesture patterns that have similar trajectory shapes but different semantic meanings; for example, the gesture ‘push’ can be mistaken for ‘drag’ when seen from another viewpoint. To address this shortcoming, the authors use a shape descriptor to extract view-invariant features of a three-dimensional (3D) trajectory. Because the shape features are invariant to omnidirectional viewpoint changes, orientation features are then incorporated to weight different rotation angles so that similar trajectory shapes are better separated. The proposed method was evaluated on two databases: the popular Australian Sign Language database and the challenging Kinect Hand Trajectory database. Experimental results show that the proposed algorithm achieves a higher average recognition rate than state-of-the-art approaches and can better distinguish confusable gestures while satisfying the view-invariance condition.
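
The abstract does not specify which shape descriptor is used, so the following is only a minimal illustrative sketch of the general idea, assuming curvature and torsion as the view-invariant shape features: both quantities are unchanged by 3D rotation and translation of a trajectory, which mimics a viewpoint change. All names here (`shape_features`, the helical toy trajectory) are hypothetical and not taken from the paper.

```python
import numpy as np

def shape_features(traj):
    """Rotation- and translation-invariant curvature/torsion profile
    of a 3D trajectory given as an (N, 3) array of points."""
    d1 = np.gradient(traj, axis=0)   # first derivative (velocity)
    d2 = np.gradient(d1, axis=0)     # second derivative
    d3 = np.gradient(d2, axis=0)     # third derivative
    cross = np.cross(d1, d2)
    speed = np.linalg.norm(d1, axis=1)
    cross_norm = np.linalg.norm(cross, axis=1)
    eps = 1e-12                      # guard against division by zero
    curvature = cross_norm / (speed ** 3 + eps)
    torsion = np.einsum('ij,ij->i', cross, d3) / (cross_norm ** 2 + eps)
    return np.stack([curvature, torsion], axis=1)

# Invariance check: rotating the whole trajectory (a simulated
# viewpoint change) leaves the shape features numerically unchanged.
rng = np.random.default_rng(0)
t = np.linspace(0.0, 2.0 * np.pi, 200)
traj = np.stack([np.cos(t), np.sin(t), 0.3 * t], axis=1)  # toy helical "gesture"
Q, _ = np.linalg.qr(rng.normal(size=(3, 3)))              # random orthogonal matrix
if np.linalg.det(Q) < 0:
    Q[:, 0] *= -1.0                                       # make it a proper rotation
assert np.allclose(shape_features(traj), shape_features(traj @ Q.T))
```

Note that such fully invariant features are exactly what causes the confusion the abstract describes (‘push’ versus ‘drag’): the orientation-weighting stage the authors propose would reintroduce discriminative angle information on top of features like these.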
