Abstract

Sign language (SL), a highly visual–spatial, linguistically complete, and natural language, is the main mode of communication among deaf people. This paper describes two American Sign Language (ASL) word recognition systems, developed using artificial neural networks (ANNs), that translate ASL words into English. The first system uses feature vectors of signed words sampled at five time instants; the second uses histograms of those feature vectors. Both systems extract gesture features with a CyberGlove™ sensory glove and a Flock of Birds® 3-D motion tracker: finger joint angle data from strain gauges in the glove define the hand shape, and data from the tracker describe the trajectory of hand movement. In both systems, the data from these devices are processed by two neural networks: a velocity network and a word recognition network. The velocity network uses hand speed to determine the duration of each word. Signs are described by features such as hand shape, hand location, orientation, movement, bounding box, and distance. The word recognition network classifies ASL signs into English words from these features or from their histograms. We trained and tested the ANN models on 60 ASL words with varying numbers of samples and compared the two methods. Test results show recognition accuracies of 92% and 95% for the two systems, respectively.
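The paper itself includes no source code; the sketch below is a minimal illustration, assuming NumPy, of the two input encodings the abstract contrasts: sampling the per-frame feature vector at five time instants (system 1) versus accumulating per-dimension histograms of feature vectors over the word's duration (system 2), with word boundaries taken from a simple hand-speed threshold standing in for the velocity network. All function names, dimensions, and thresholds are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

# Illustrative frame layout (an assumption, not the paper's actual values):
# 18 joint angles from the CyberGlove plus 3-D position and orientation
# from the Flock of Birds tracker, all normalized to [-1, 1].
FRAME_DIM = 18 + 6

def segment_word(speeds, threshold=0.05):
    """Crude stand-in for the velocity network: a word spans the
    frames where hand speed exceeds a threshold."""
    active = np.flatnonzero(speeds > threshold)
    return (active[0], active[-1] + 1) if active.size else (0, len(speeds))

def five_instant_features(frames, start, end):
    """System 1: concatenate the feature vectors sampled at five
    evenly spaced time instants within the word."""
    idx = np.linspace(start, end - 1, num=5).astype(int)
    return frames[idx].ravel()                      # shape: (5 * FRAME_DIM,)

def histogram_features(frames, start, end, bins=8):
    """System 2: per-dimension histograms of the feature vectors
    over the whole word, normalized to unit sum."""
    word = frames[start:end]
    hists = [np.histogram(word[:, d], bins=bins, range=(-1.0, 1.0))[0]
             for d in range(word.shape[1])]
    h = np.concatenate(hists).astype(float)
    return h / h.sum()                              # shape: (bins * FRAME_DIM,)

# Toy usage: 40 frames of a signed word with synthetic sensor readings.
rng = np.random.default_rng(0)
frames = rng.uniform(-1.0, 1.0, size=(40, FRAME_DIM))
speeds = rng.uniform(0.0, 1.0, size=40)

start, end = segment_word(speeds)
x1 = five_instant_features(frames, start, end)  # input to word network, system 1
x2 = histogram_features(frames, start, end)     # input to word network, system 2
print(x1.shape, x2.shape)
```

Either encoding yields a fixed-length vector regardless of how long the word takes to sign, which is what lets a standard feed-forward classifier serve as the word recognition network.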
