Abstract

Hand gestures are a universally adopted means of communication, conveying messages in the form of sign language. To communicate with a person who is deaf or mute, a hearing person therefore needs some knowledge of sign language and must be able to produce its gestures. By understanding and animating hand gestures, we can help facilitate communication between computers and the hearing and speech impaired. In this paper, we present a method for synthesizing hand gestures with a computer, which may enable a hearing person to convey a message to a mute person more easily, without any knowledge of sign language. The proposed technique requires the system to be trained prior to operation. Gesture animation is computationally complex, as it involves replicating the hand with its 27 degrees of freedom, and it also involves gesture recognition. Hence, in this paper, we implement a gesture animation framework that operates after hand gestures have been recognized. Computational complexity is significantly reduced by summarizing a long gesture sequence as a set of key frames. The animation process computes hand parameters for every pose in a gesture sequence from information such as the positions of the fingers, the locations of the metacarpophalangeal joints, and the bend angles of the fingers. Using these parameters, hand pose estimation is performed by imposing constraints of the hand. A gesture sequence is then animated using these hand models, with the models for the frames between key frames obtained by interpolation. In our experiments, we demonstrate gesture animation whose hand poses exactly match the real gesture.
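The abstract states that the hand models for in-between frames are obtained by interpolating between key-frame models, but does not spell out the interpolation scheme. The sketch below assumes simple per-joint linear interpolation of joint-angle vectors over frame indices; the function name, the 27-element joint-angle representation, and the NumPy-based layout are illustrative assumptions, not the paper's implementation.

import numpy as np

def interpolate_hand_poses(key_poses, key_times, query_times):
    """Illustrative sketch (not from the paper): linear interpolation of
    joint angles between key frames.

    key_poses:   (K, D) array of joint angles, one row per key frame
                 (D = hand degrees of freedom, e.g. 27).
    key_times:   (K,) frame indices of the key frames, strictly increasing.
    query_times: (T,) frame indices of the in-between frames to generate.

    Returns a (T, D) array of interpolated joint-angle vectors.
    """
    key_poses = np.asarray(key_poses, dtype=float)
    key_times = np.asarray(key_times, dtype=float)
    query_times = np.asarray(query_times, dtype=float)

    # Interpolate each degree of freedom independently along the time axis.
    return np.stack(
        [np.interp(query_times, key_times, key_poses[:, d])
         for d in range(key_poses.shape[1])],
        axis=1,
    )

# Example: two key frames for a 27-DOF hand, three in-between frames.
key_poses = np.zeros((2, 27))
key_poses[1, :] = np.deg2rad(45.0)   # all joints bent by 45 degrees in the second key frame
frames = interpolate_hand_poses(key_poses, key_times=[0, 4], query_times=[1, 2, 3])
print(frames.shape)                  # (3, 27)

Linear interpolation keeps every in-between pose between the two surrounding key poses; a smoother scheme such as spline interpolation of the joint angles could be substituted without changing the key-frame summarization itself.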
