Abstract

To improve the link between operators and equipment, communication systems have begun to use natural (user-oriented) modalities such as speech and gestures. Our goal is to present gesture recognition based on the fusion of measurements from different sources. The sensors must capture at least the location and orientation of the hand; here this is done with a Dataglove and a video camera. The Dataglove provides the hand position, while the video camera provides the overall arm gesture, representing its physical and spatial properties through a two-dimensional (2D) skeleton representation of the arm. The measurements are partly complementary and partly redundant. The application is distributed over intelligent cooperating sensors. We detail the measurement of hand positioning and arm gestures, the fusion process, and the implementation.
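The abstract does not spell out the fusion rule, so the following is only a minimal sketch of what "partly complementary and partly redundant" measurements can mean in practice: redundant estimates of the same quantity (the hand position, seen by both sensors) are fused by inverse-variance weighting, while complementary features (glove posture, 2D arm skeleton) are concatenated into one feature vector. All names, dimensions, and noise values below are illustrative assumptions, not the paper's method.

```python
import numpy as np

def fuse_redundant(estimates, variances):
    """Fuse redundant estimates of the same quantity by
    inverse-variance weighting (a standard fusion rule)."""
    estimates = np.asarray(estimates, dtype=float)
    weights = 1.0 / np.asarray(variances, dtype=float)
    return np.sum(weights[:, None] * estimates, axis=0) / np.sum(weights)

# Complementary part: each sensor contributes features the other lacks.
glove_posture = np.array([0.8, 0.1, 0.0, 0.9, 0.7])          # finger flexion (hypothetical)
skeleton_2d = np.array([[120, 40], [160, 85], [200, 130]])   # arm joints in image pixels

# Redundant part: both sensors observe the hand position.
hand_pos_glove = np.array([0.42, 0.10, 0.55])    # metres, from the glove tracker
hand_pos_camera = np.array([0.45, 0.12, 0.50])   # metres, back-projected from the image
fused_hand_pos = fuse_redundant(
    [hand_pos_glove, hand_pos_camera],
    variances=[1e-4, 4e-4],  # camera assumed noisier (illustrative values)
)

# A single feature vector for the gesture recogniser.
feature_vector = np.concatenate([glove_posture, skeleton_2d.ravel(), fused_hand_pos])
```

Inverse-variance weighting is the simplest consistent way to combine redundant readings; a full system would more likely use a filter that also tracks dynamics, but the weighting step above is the core of any such fusion.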
