Abstract

3D sign language generation has shown real progress in recent years. Many systems have been proposed to generate animated sign language through avatars; however, the technology is still young, and many fundamental parameters of sign language, such as facial expressions and other iconic features, have been ignored in the proposed systems. In this paper, we focus on the generation and analysis of descriptive classifiers, also called Size and Shape Specifiers (SASSes), in 3D sign language data. We propose a new adaptation of the phonological structure of handshapes given by Brentari. Our adapted framework is able to encode 3D descriptive classifiers that can express different sizes and quantities of shapes. We describe how our model has been implemented through an XML framework. Our model links the phonological level with the 3D physical animation level, since it is compliant with sign language phonology as described by Brentari and by Liddell & Johnson, as well as with 3D animation standards.

Keywords: 3D Sign Language, Classifiers, Phonology
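As a rough illustration of the kind of encoding the abstract describes, the sketch below shows one possible XML representation of a SASS classifier, loosely following Brentari's decomposition of handshape into selected fingers and joint configuration. All element and attribute names here are hypothetical; the paper's actual schema may differ.

```xml
<!-- Hypothetical sketch of a SASS entry: handshape structure (selected
     fingers + joint/aperture configuration) plus size/shape parameters
     that a 3D avatar engine could consume. Names are illustrative only. -->
<sass id="flat-round-object">
  <handshape>
    <selectedFingers>
      <finger name="index"/>
      <finger name="middle"/>
      <finger name="thumb" opposed="true"/>
    </selectedFingers>
    <!-- joint configuration (aperture) of the selected fingers -->
    <joints aperture="curved"/>
  </handshape>
  <!-- scalable parameters expressing the described size and shape -->
  <size extent="medium"/>
  <shape contour="round"/>
  <!-- link from the phonological description to the 3D animation layer -->
  <animation target="rightHand" blend="0.8"/>
</sass>
```

The intent of such a structure would be to keep the phonological description (handshape, size, shape) separate from, but mappable onto, the parameters driving the avatar animation.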
