Abstract

Sign languages convey information through the visual-manual modality, thereby enabling communication among hearing-impaired people. However, no fully developed technology yet enables communication between hearing-impaired people and hearing people. Current approaches to sign language translation are based on videos and pictures, which are difficult to edit once recorded. Here, we propose a new framework based on speech recognition, natural language processing, and 3D virtual human technology. Our method (1) provides a new translation approach based on a virtual human, whose gestures can be easily edited with the HamNoSys keyboard; (2) can be applied to the translation of various sign languages, including Chinese Sign Language (CSL), American Sign Language (ASL), and British Sign Language (BSL); and (3) achieves grammar conversion between sign and spoken languages. In a preliminary test on a simple conversation, the framework achieved unilateral translation from spoken to sign language. Further improvements will be obtained by incorporating more vocabulary and grammar-conversion rules.
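
To make the described pipeline concrete, below is a minimal sketch of the spoken-to-sign translation flow the abstract outlines: spoken words are mapped to sign glosses, a grammar-conversion rule reorders them, and each gloss is looked up as a HamNoSys transcription for the avatar to render. All names, lexicon entries, the reordering rule, and the HamNoSys strings here are hypothetical placeholders for illustration; they are not the paper's actual components or data.

```python
"""Toy sketch of a spoken-to-sign pipeline (hypothetical, not the paper's code)."""

# Hypothetical spoken-word -> sign-gloss lexicon. None marks words
# (e.g., function words) that have no manual sign and are dropped.
GLOSS_LEXICON = {
    "i": "IX-1",
    "will": None,
    "go": "GO",
    "to": None,
    "school": "SCHOOL",
    "tomorrow": "TOMORROW",
}

# Hypothetical gloss -> HamNoSys transcription table (placeholder strings,
# not valid HamNoSys notation).
HAMNOSYS = {
    "IX-1": "<hamnosys:point-self>",
    "GO": "<hamnosys:flathand-move-forward>",
    "SCHOOL": "<hamnosys:flathands-clap>",
    "TOMORROW": "<hamnosys:thumb-cheek-forward>",
}

def spoken_to_glosses(sentence: str) -> list[str]:
    """Map spoken words to sign glosses, dropping words without a sign."""
    glosses = []
    for word in sentence.lower().split():
        gloss = GLOSS_LEXICON.get(word)
        if gloss is not None:
            glosses.append(gloss)
    return glosses

def reorder_to_sign_grammar(glosses: list[str]) -> list[str]:
    """Toy grammar-conversion rule: front time adverbs, reflecting the
    topic-comment order common in many sign languages."""
    time_signs = [g for g in glosses if g == "TOMORROW"]
    rest = [g for g in glosses if g != "TOMORROW"]
    return time_signs + rest

def translate(sentence: str) -> list[str]:
    """Spoken sentence -> ordered HamNoSys transcriptions for the avatar."""
    glosses = reorder_to_sign_grammar(spoken_to_glosses(sentence))
    return [HAMNOSYS[g] for g in glosses]

if __name__ == "__main__":
    # Glosses become: TOMORROW IX-1 GO SCHOOL
    print(translate("I will go to school tomorrow"))
```

In a full system, the input sentence would come from a speech recognizer and each HamNoSys transcription would drive the 3D virtual human's gesture animation; the table-driven design above is only meant to show where editable HamNoSys entries fit into such a pipeline.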
