Abstract

The use of computers to recognize and represent sign language has made significant strides in recent years. This article surveys notable efforts in the field, including robotic fingerspelling hands such as 'Ralph', 'CyberGloves' (virtual-reality sensors that capture both isolated and continuous signs), camera-based projects such as the CopyCat American Sign Language game, sign-recognition software, and systems built on Hidden Markov Modelling and neural networks. It also covers spoken-language-to-sign-language translation technologies such as 'TESSA' (Text and Sign Support Assistant), which uses three-dimensional imaging, and 'THETOS' (Text into Sign Language Automatic Translator), a Polish project based on natural language processing. How this research might be applied in education is examined as well. The 'ICICLE' project (Interactive Computer Identification and Correction of Language Errors), for instance, uses intelligent computer-aided instruction to build a tutoring system that analyses the English writing of deaf or hard-of-hearing children and provides tailored lessons and recommendations. The article also considers synthesized signing, which is already being incorporated into instructional materials and may one day be produced by students themselves.
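For context, the Hidden Markov Modelling approach mentioned above is commonly applied to isolated-sign recognition by training one model per sign on sequences of per-frame features (for example, glove sensor readings or tracked hand coordinates) and then classifying a new sequence by likelihood. The sketch below is illustrative only, not drawn from any of the systems surveyed; it assumes the hmmlearn Python library, already-extracted feature sequences, and hypothetical helper names (train_sign_models, classify_sign).

import numpy as np
from hmmlearn import hmm

def train_sign_models(training_data, n_states=5):
    # training_data: dict mapping a sign label to a list of example
    # sequences, each an (n_frames, n_features) NumPy array.
    models = {}
    for label, sequences in training_data.items():
        X = np.vstack(sequences)                   # all frames, concatenated
        lengths = [len(seq) for seq in sequences]  # boundaries between examples
        model = hmm.GaussianHMM(n_components=n_states,
                                covariance_type="diag", n_iter=50)
        model.fit(X, lengths)
        models[label] = model
    return models

def classify_sign(models, sequence):
    # Score the observed sequence under every sign's HMM and return
    # the label with the highest log-likelihood.
    return max(models, key=lambda label: models[label].score(sequence))

In practice, continuous (sentence-level) recognition chains such per-sign models together, which is one reason HMM toolchains have been a recurring choice in the glove- and camera-based projects the article describes.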
