Abstract
Software that automatically generates American Sign Language (ASL) can benefit deaf people with low English literacy. However, current computational linguistic software cannot produce important aspects of ASL signs and verbs; better models of spatially complex signs are needed. Our goals are to create a linguistic resource of ASL signs via motion-capture data collection, to model the movement paths of inflecting/indicating verbs using machine-learning and computational techniques, and to produce grammatical, natural-looking, and understandable animations of ASL. Our methods include linguistic annotation of the data and evaluation by native ASL signers. This summary also describes our research progress.