Abstract
Sign languages use the visual-manual modality to convey information, enabling communication among hearing-impaired people. However, no fully developed technology yet enables communication between hearing-impaired people and those without hearing disabilities. Current approaches to sign language translation rely on videos and pictures, which are difficult to edit once recorded. Here, we propose a new framework based on speech recognition, natural language processing, and 3D virtual human technology. Our method (1) provides a new translation approach based on a virtual human, whose gestures can be easily edited with the HamNoSys keyboard; (2) can be applied to the translation of various sign languages, including Chinese Sign Language (CSL), American Sign Language (ASL), and British Sign Language (BSL); and (3) achieves grammar conversion between sign and spoken languages. In a preliminary test on a simple conversation, unilateral translation from spoken to sign language was achieved. Further improvements can be obtained by incorporating more vocabulary and additional grammar-conversion rules.
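The framework can be pictured as a pipeline: recognized speech is tokenized, reordered by grammar-conversion rules, and mapped to HamNoSys-driven glosses that animate the virtual human. The sketch below illustrates that flow under stated assumptions; the toy lexicon, the single reordering rule, and all function names are illustrative, not the authors' implementation.

```python
# Hypothetical sketch of the spoken-to-sign pipeline described in the abstract.
# The lexicon entries and the grammar rule are illustrative assumptions only.

# Toy lexicon mapping English words to gloss entries. A real system would map
# each gloss to a full HamNoSys symbol string driving the 3D virtual human.
LEXICON = {
    "i": "IX-1",        # first-person point
    "go": "GO",
    "school": "SCHOOL",
    "tomorrow": "TOMORROW",
}

def grammar_convert(words):
    """Apply a toy spoken-to-sign reordering rule.

    Many sign languages front time expressions (topic-comment order),
    so "I go to school tomorrow" becomes "TOMORROW I GO SCHOOL".
    This single rule stands in for a fuller rule set.
    """
    time_words = [w for w in words if w == "tomorrow"]
    rest = [w for w in words if w != "tomorrow"]
    return time_words + rest

def translate(utterance):
    """Translate recognized speech (as text) into a gloss sequence."""
    words = [w.strip(".,!?").lower() for w in utterance.split()]
    reordered = grammar_convert(words)
    # Drop function words absent from the sign lexicon (e.g. "to").
    return [LEXICON[w] for w in reordered if w in LEXICON]

if __name__ == "__main__":
    # Output: ['TOMORROW', 'IX-1', 'GO', 'SCHOOL']
    print(translate("I go to school tomorrow."))
```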