Abstract
The aim of this study was to develop an animated 3D facial model with gestures to assist early language learners and subjects with hearing disabilities. A text-to-speech (TTS) engine for the Tamil language was built using a syllable-based concatenation approach and integrated with the 3D facial model to produce natural speech. In this work, facial parts were modeled using a set of polygons, and control points were identified to parameterize the model. Expressions such as Happy, Sad, Anger, and Fear were simulated through various gestures, including lip, eyebrow, and jaw movements. The results indicated that 80%–85% of the gesture words were correctly identified by the children. The ultimate goal of the system is to help children with hearing disabilities communicate effectively during their language learning.
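The syllable-based concatenation approach mentioned above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the syllable inventory, the greedy syllabification, and the placeholder sample values are all assumptions introduced here for clarity.

```python
def syllabify(word, inventory):
    """Greedy longest-match split of a word into known syllables (hypothetical method)."""
    syllables, i = [], 0
    while i < len(word):
        # Try the longest possible substring first, shrinking until a match is found.
        for j in range(len(word), i, -1):
            if word[i:j] in inventory:
                syllables.append(word[i:j])
                i = j
                break
        else:
            raise ValueError(f"no syllable in the inventory covers {word[i:]!r}")
    return syllables

def synthesize(word, inventory):
    """Concatenate the stored waveform samples of each syllable in order."""
    samples = []
    for syl in syllabify(word, inventory):
        samples.extend(inventory[syl])
    return samples

# Toy inventory: syllable -> recorded audio samples (placeholder numbers,
# standing in for real waveform data recorded per Tamil syllable).
inventory = {"va": [1, 2], "nak": [3, 4], "kam": [5, 6]}
print(synthesize("vanakkam", inventory))  # concatenated sample stream
```

In a real system, the inventory would map Tamil syllables to recorded audio units, and the joins would typically be smoothed (e.g. by cross-fading) to reduce concatenation artifacts.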