Abstract

Sign language is the indispensable means of communication for deaf people. Since most hearing people are unfamiliar with the language used by deaf people, an interpretation system that eases communication between deaf people and their social environment appears necessary. The main challenge in such a system is identifying each sign in continuous sign language video. This work therefore presents a computer-vision-based system for recognizing signs in continuous sign language video. The system is based on two main phases: sign word extraction and sign classification. The most challenging task in this process is separating sign words from video sequences. For this purpose, we present a new algorithm able to detect accurate word boundaries in a continuous sign language video. Using hand shape and motion features, this algorithm extracts isolated signs from the video and shows better efficiency than other approaches reported in the literature. In the recognition phase, the extracted signs are classified and recognized using a Hidden Markov Model (HMM), which was adopted after testing other approaches such as Independent Bayesian Classifier Combination (IBCC). Our system shows promising performance, with recognition accuracy of 95.18% for one-hand gestures and 93.87% for two-hand gestures. Compared to systems using only manual features, the proposed framework gains 2.24% and 2.9% on one- and two-hand gestures respectively when head pose and eye gaze features are employed. These results were obtained on a dataset containing 33 isolated signs.
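The abstract does not spell out how the HMM classification step works, but the standard recipe it implies is to train one HMM per sign and label an extracted sign sequence with the model that assigns it the highest likelihood. The sketch below is a minimal, hypothetical illustration of that recipe using discrete HMMs and the forward algorithm; the model parameters and sign names are invented for the example and are not from the paper.

```python
import math

def forward_log_likelihood(obs, pi, A, B):
    """Log-likelihood of a discrete observation sequence under an HMM,
    computed with the forward algorithm.

    obs : list of observation symbol indices
    pi  : initial state distribution, pi[i]
    A   : state transition matrix, A[i][j] = P(state j | state i)
    B   : emission matrix, B[i][k] = P(symbol k | state i)
    (Probabilities are kept in linear space for brevity; fine for
    the short sequences used here.)
    """
    n = len(pi)
    # alpha[i] = P(obs[0..t], state at t = i)
    alpha = [pi[i] * B[i][obs[0]] for i in range(n)]
    for t in range(1, len(obs)):
        alpha = [B[j][obs[t]] * sum(alpha[i] * A[i][j] for i in range(n))
                 for j in range(n)]
    return math.log(sum(alpha))

def classify(obs, models):
    """Return the sign whose HMM gives the sequence the highest likelihood."""
    return max(models, key=lambda name: forward_log_likelihood(obs, *models[name]))

# Toy per-sign HMMs (invented parameters): two states, two observation
# symbols; "sign_a" tends to emit symbol 0, "sign_b" tends to emit symbol 1.
A = [[0.7, 0.3], [0.3, 0.7]]
models = {
    "sign_a": ([0.5, 0.5], A, [[0.9, 0.1], [0.8, 0.2]]),
    "sign_b": ([0.5, 0.5], A, [[0.1, 0.9], [0.2, 0.8]]),
}

print(classify([0, 0, 1, 0], models))  # a mostly-0 sequence matches sign_a
print(classify([1, 1, 0, 1], models))  # a mostly-1 sequence matches sign_b
```

In a full system, the discrete symbols would be replaced by continuous feature vectors (hand shape, motion, and, per the abstract, head pose and eye gaze), typically modeled with Gaussian emission densities rather than a discrete emission table.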
