Abstract

Gesture recognition, as one of the most promising human–robot interaction approaches, has long attracted research interest. Although many feasible methods have been proposed in this field, current gesture modeling and recognition methods still fall short in expressiveness, naturalness, and efficiency. In this paper, we present a novel method for modeling and recognizing natural gestures in 3-D space. Inspired by joint-space modeling, which is frequently used in robotics, we decompose gestures into joint movements, joint angles, and arm orientations. We calculate joint angles and detect joint movements from orientation data measured by inertial measurement units (IMUs). A wide range of gestures can be modeled by combining three types of criteria: joint angle criteria, arm orientation criteria, and joint movement criteria. Our system can also handle repetitive movements, which are common in gestures but rarely considered in previous work. Unlike previous statistical learning or template matching approaches, our approach requires no training, so users can conveniently add new gestures to the gesture set or edit existing gesture definitions. To evaluate our system, we conducted real-time gesture recognition experiments with ten subjects. Twelve gestures, comprising three static gestures and nine dynamic gestures, were modeled for the experiments. The average recognition accuracy over a total of 1560 gesture samples was 91.86%.
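
The abstract does not give an implementation, but the criteria-based modeling it describes can be illustrated with a minimal sketch. The example below is not the authors' code: the ArmState fields, thresholds, and gesture definition are hypothetical, and they only show how a gesture could be expressed as a trainable-free combination of joint angle, arm orientation, and joint movement criteria that a user edits directly.

```python
# Minimal sketch of criteria-based gesture modeling (hypothetical names and thresholds).
from dataclasses import dataclass, field
from typing import Callable, List


@dataclass
class ArmState:
    elbow_angle: float       # elbow flexion in degrees, derived from IMU orientations
    upper_arm_pitch: float   # upper-arm orientation relative to gravity, in degrees
    elbow_moving: bool       # True while the elbow joint angle is changing
    shoulder_moving: bool    # True while the shoulder joint angle is changing


Criterion = Callable[[ArmState], bool]


@dataclass
class GestureModel:
    name: str
    criteria: List[Criterion] = field(default_factory=list)

    def matches(self, state: ArmState) -> bool:
        # A gesture is recognized when every criterion holds for the current arm state.
        return all(criterion(state) for criterion in self.criteria)


# Illustrative static "raise hand" gesture: elbow nearly straight, upper arm raised,
# and no ongoing joint movement. Each lambda corresponds to one criterion type.
raise_hand = GestureModel(
    name="raise_hand",
    criteria=[
        lambda s: s.elbow_angle > 150.0,                          # joint angle criterion
        lambda s: s.upper_arm_pitch > 60.0,                       # arm orientation criterion
        lambda s: not s.elbow_moving and not s.shoulder_moving,   # joint movement criterion
    ],
)

if __name__ == "__main__":
    sample = ArmState(elbow_angle=165.0, upper_arm_pitch=75.0,
                      elbow_moving=False, shoulder_moving=False)
    print(raise_hand.name, "recognized:", raise_hand.matches(sample))
```

Because each gesture is just a named list of predicates, adding a new gesture or editing an existing one amounts to editing these declarative rules rather than collecting data and retraining a model, which matches the no-training property claimed in the abstract.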
