Abstract

The sign languages used by deaf communities around the world represent a linguistic challenge that natural-language researchers in AI have only recently begun to take up. This challenge is particularly relevant to research in Machine Translation (MT), as natural sign languages have evolved in deaf communities into efficient modes of gestural communication, which differ from English not only in modality but in grammatical structure, exploiting a higher dimensionality of spatial expression. In this paper we describe Zardoz, an ongoing AI research system that tackles the cross-modal MT problem, translating English text into fluid sign language. The paper presents an architectural overview of Zardoz, describing its central blackboard organization, the nature of its interlingual representation, and the major components that interact through this blackboard both to analyze the verbal input and to generate the corresponding gestural output in one of a number of sign variants.
