Abstract

Sign languages are used all over the world as a primary means of communication by deaf people, and sign language translation is a promising application for vision-based gesture recognition; a tool that can translate sign language directly is therefore needed. This paper aims to create a system that automatically translates static sign language into textual form based on computer vision. The method comprises three phases: segmentation, feature extraction, and recognition. We used the Generic Fourier Descriptor (GFD) as the feature extraction method and K-Nearest Neighbour (KNN) as the classification approach to recognize the signs. The system was applied to 120 images stored in a database and 120 images captured in real time by webcam; we also translated 5 words in video sequences. The experiments revealed that the system recognizes the signs with about 86% accuracy on the stored database images and about 69% on the test data captured in real time by webcam.
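The pipeline the abstract describes (GFD feature extraction followed by KNN classification) can be sketched roughly as below. This is a minimal illustration, not the paper's implementation: the polar sampling resolution, the number of radial and angular frequencies (`m`, `n`), and the toy shape labels are all assumptions introduced here.

```python
import numpy as np

def generic_fourier_descriptor(img, m=4, n=9):
    """Sketch of GFD: sample the image on a polar raster centred on the
    image centre, take a 2D FFT, and keep magnitudes of the first m radial
    and n angular frequencies, normalized by the DC term.
    m, n, and the 64x64 polar grid are assumed values, not the paper's."""
    h, w = img.shape
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    max_r = np.hypot(cy, cx)
    R, T = 64, 64  # polar sampling resolution (assumed)
    rs = np.linspace(0, max_r, R, endpoint=False)
    ts = np.linspace(0, 2 * np.pi, T, endpoint=False)
    rr, tt = np.meshgrid(rs, ts, indexing="ij")
    # Nearest-neighbour lookup of Cartesian pixels at each polar sample
    ys = np.clip((cy + rr * np.sin(tt)).round().astype(int), 0, h - 1)
    xs = np.clip((cx + rr * np.cos(tt)).round().astype(int), 0, w - 1)
    polar = img[ys, xs].astype(float)
    mag = np.abs(np.fft.fft2(polar))
    dc = mag[0, 0] if mag[0, 0] != 0 else 1.0
    feats = [mag[0, 0] / polar.size]  # DC term normalized by area
    for r in range(m):
        for a in range(n):
            if r == 0 and a == 0:
                continue
            feats.append(mag[r, a] / dc)  # other terms normalized by DC
    return np.array(feats)

def knn_predict(train_X, train_y, x, k=3):
    """Plain KNN: majority label among the k nearest training vectors."""
    dists = np.linalg.norm(train_X - x, axis=1)
    nearest = np.argsort(dists)[:k]
    labels, counts = np.unique([train_y[i] for i in nearest],
                               return_counts=True)
    return labels[np.argmax(counts)]
```

In use, each segmented hand image in the database would be reduced to a GFD feature vector, and a webcam frame would be classified by comparing its vector against the stored ones with `knn_predict`.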
