Abstract

Sign languages are a natural means of communication for deaf and hard-of-hearing people. There is no universal sign language: almost every country has its own national sign language and fingerspelling alphabet. Sign languages rely on visual-kinetic cues for human-to-human communication, combining hand gestures with lip articulation and facial expressions. They also possess a grammar that differs considerably from that of speech-based spoken languages. Sign languages are used (silently) by about a hundred million deaf people all over the world; the most widespread are American (ASL), Chinese, Brazilian, Russian, and British Sign Languages, and Ethnologue lists almost 140 such languages. They have no natural written form, and electronic resources for them, in particular vocabularies, audio-visual databases, and automatic recognition and synthesis systems, are severely lacking. Thus, sign languages may be considered non-written, under-resourced spoken languages. In this paper, we present a computer system for text-to-sign language synthesis for the Russian and Czech Sign Languages.
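As a rough illustration of what a text-to-sign synthesis front end involves (this is not the authors' system, only a minimal sketch under common assumptions), written text is typically mapped to a sequence of sign glosses via a lexicon, with out-of-vocabulary words fingerspelled letter by letter; the resulting identifiers would then drive an avatar's animations. All names below (SIGN_LEXICON, FINGERSPELLING, synthesize) are hypothetical.

```python
# Minimal, hypothetical sketch of a dictionary-based text-to-gloss stage
# with a fingerspelling fallback; not the authors' implementation.
from typing import List

# Toy lexicon: written word -> sign-gloss animation identifier.
SIGN_LEXICON = {
    "hello": "GLOSS_HELLO",
    "thank": "GLOSS_THANK",
    "you": "GLOSS_YOU",
}

# Toy fingerspelling alphabet: letter -> handshape animation identifier.
FINGERSPELLING = {ch: f"FS_{ch.upper()}" for ch in "abcdefghijklmnopqrstuvwxyz"}


def synthesize(text: str) -> List[str]:
    """Convert written text into a sequence of animation identifiers:
    known words become sign glosses, unknown words are fingerspelled."""
    sequence: List[str] = []
    for word in text.lower().split():
        word = word.strip(".,!?")
        if word in SIGN_LEXICON:
            sequence.append(SIGN_LEXICON[word])
        else:
            # Out-of-vocabulary word: spell it letter by letter.
            sequence.extend(FINGERSPELLING[ch] for ch in word if ch in FINGERSPELLING)
    return sequence


if __name__ == "__main__":
    print(synthesize("Hello, thank you Praha!"))
    # ['GLOSS_HELLO', 'GLOSS_THANK', 'GLOSS_YOU', 'FS_P', 'FS_R', 'FS_A', 'FS_H', 'FS_A']
```

A real system would additionally reorder the gloss sequence according to sign language grammar and add non-manual components (lip articulation, facial expressions), which the abstract identifies as integral to sign languages.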
