Abstract

To improve the link between an operator and a machine, some human-oriented communication systems now use natural communication modes such as speech or gesture. The goal of this paper is to present a gesture recognition system based on the fusion of measurements issued from different kinds of sources. The system requires sensors able to capture at least the position and the orientation of the hand, such as a Dataglove and a video camera. The Dataglove measures the hand posture, while the video camera measures the general arm gesture, which represents the physical and spatial properties of the gesture and is based on a 2D skeleton representation of the arm. The measurements used are partially complementary and partially redundant. The application is distributed over intelligent cooperating sensors. The paper presents the measurement of the hand and arm gestures, the fusion processes, and the implementation solution.
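
The abstract describes fusing partially complementary, partially redundant measurements from cooperating sensors (hand posture from the Dataglove, arm skeleton from the camera). The paper's actual fusion process is not given here, so the sketch below only illustrates the general idea with an assumed weighted score fusion; the gesture labels, scores, and reliability weights are hypothetical.

```python
# Minimal sketch of combining two partially redundant sensor measurements
# into a single gesture decision. Class names, scores, and the weighting
# scheme are illustrative assumptions, not the paper's actual method.
from dataclasses import dataclass

GESTURES = ["point", "grasp", "wave"]

@dataclass
class SensorEstimate:
    """Per-gesture scores produced by one sensor's local classifier."""
    scores: dict[str, float]   # gesture label -> confidence in [0, 1]
    reliability: float         # assumed trust weight for this sensor

def fuse(estimates: list[SensorEstimate]) -> str:
    """Weighted score fusion: each cooperating sensor votes with its
    reliability; the gesture with the highest combined score wins."""
    combined = {g: 0.0 for g in GESTURES}
    for est in estimates:
        for gesture, score in est.scores.items():
            combined[gesture] += est.reliability * score
    return max(combined, key=combined.get)

# Hypothetical outputs: the glove observes the hand posture, the camera
# observes the 2D arm skeleton; their views partly overlap, partly complement.
glove = SensorEstimate({"point": 0.7, "grasp": 0.2, "wave": 0.1}, reliability=0.6)
camera = SensorEstimate({"point": 0.5, "grasp": 0.1, "wave": 0.4}, reliability=0.4)

print(fuse([glove, camera]))  # -> "point"
```

In a distributed arrangement like the one the abstract mentions, each sensor would compute its local estimate on its own node and only the compact score vectors would be exchanged for fusion.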
