Abstract
Software that generates American Sign Language (ASL) automatically can benefit deaf people with low English literacy. However, modern computational linguistic software cannot produce important aspects of ASL signs and verbs; better models of spatially complex signs are needed. Our goals are: to create a linguistic resource of ASL signs via motion-capture data collection; to model the movement paths of inflecting/indicating verbs using machine learning and computational techniques; and to produce grammatical, natural-looking, and understandable animations of ASL. Our methods include linguistic annotation of the data and evaluation by native ASL signers. This summary also describes our research progress.