Abstract
Software to generate animations of American Sign Language (ASL) has important accessibility benefits for the significant number of deaf adults with low levels of written language literacy. We have implemented a prototype software system to generate an important subset of ASL phenomena called "classifier predicates," complex and spatially descriptive types of sentences. The output of this prototype system has been evaluated by native ASL signers. Our generator includes several novel models of 3D space, spatial semantics, and temporal coordination motivated by linguistic properties of ASL. These classifier predicates have several similarities to iconic gestures that often co-occur with spoken language; these two phenomena will be compared. This article explores implications of the design of our system for research in multimodal gesture generation systems. A conceptual model of multimodal communication signals is introduced to show how computational linguistic research on ASL relates to the field of multimodal natural language processing.