Abstract

Signing avatars make it possible for deaf people to access information in their preferred language. However, sign language synthesis represents a challenge for the computer animation community, as the motions generated must be realistic and carry a precise semantic meaning. In this article, we distinguish the synthesis of isolated signs, deprived of any contextual inflections, from the generation of full sign language utterances. In both cases, the animation engine takes as input a representation of the synthesis objective to create the final animation. Because of their spatiotemporal characteristics, signs and utterances cannot be described by a sequential representation like phonetics in spoken languages. For this reason, linguistic and gestural studies have aimed to capture the typical and specific features of signs and sign language syntax, leading to different sign language representations. Those sign representations can then be used to produce an avatar animation using sign synthesis techniques based on keyframes, procedural means, or data-driven approaches. Novel utterances can also be generated using concatenative or articulatory techniques. This article constitutes a survey of (i) the challenges specific to sign language avatars, (ii) the sign representations developed in order to synthesize isolated signs, (iii) the possible sign synthesis approaches, (iv) the different utterance specifications, and (v) the challenges and animation techniques for generating sign language utterances.
