Abstract

Lip synchronization is a technical term for matching lip movements with sound. Drawings, clay puppets and computer-meshed avatars do not talk, so when characters are required to say something, their dialogue has to be recorded and analysed before they can be keyframed and animated speaking. The creation of lip sync animation is therefore particularly challenging: lip movements and sounds must be mapped so that they remain synchronized. This research investigates performing lip sync animation in real time and designs a framework for the development of an automated digital speech system. Real-time lip sync animation is an approach to performing a virtual computer-generated character whose lip movements are accurately synchronized with sound during a live performance. Visemes are used as the basic animation parameters, estimating the visual similarities between different phonemes. The study of visemes covers speech processing, speech recognition and computer facial animation driven by human speech. As a result, the proposed framework supports lip sync animation by applying viseme-based human lip shapes to map mouth movements and sound so that they stay synchronized in real-time animation. It also serves as a guide for lip sync animation, using simple synchronization tricks that generally improve accuracy and the realism of the visual impression, and for the implementation of advanced features in lip synchronization applications.
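To make the viseme-based mapping concrete, the sketch below shows one way phonemes could be collapsed into viseme classes and turned into per-frame mouth-shape keys. It is a minimal illustration, not the system described in the paper: the phoneme labels, viseme names, frame rate and the `phonemes_to_keyframes` helper are all assumed for the example.

```python
# Minimal sketch: phoneme-to-viseme lookup driving per-frame mouth shapes.
# The mapping, viseme names and timings below are illustrative assumptions,
# not the mapping or framework used in the paper.

PHONEME_TO_VISEME = {
    # bilabials share one closed-lip viseme
    "p": "M_B_P", "b": "M_B_P", "m": "M_B_P",
    # labiodentals
    "f": "F_V", "v": "F_V",
    # open vowels
    "aa": "AA", "ae": "AA",
    # rounded vowels
    "ow": "O", "uw": "O",
}

def phonemes_to_keyframes(timed_phonemes, fps=30):
    """Convert (phoneme, start_sec, end_sec) tuples into per-frame viseme keys."""
    keyframes = {}
    for phoneme, start, end in timed_phonemes:
        viseme = PHONEME_TO_VISEME.get(phoneme, "REST")  # rest pose if unmapped
        first = int(round(start * fps))
        last = int(round(end * fps))
        for frame in range(first, last + 1):
            keyframes[frame] = viseme
    return keyframes

if __name__ == "__main__":
    # e.g. the word "map": m-aa-p spoken over roughly a third of a second
    demo = [("m", 0.00, 0.08), ("aa", 0.08, 0.24), ("p", 0.24, 0.32)]
    for frame, viseme in sorted(phonemes_to_keyframes(demo).items()):
        print(frame, viseme)
```

In a real-time setting, the frame-indexed viseme keys would be fed to the character rig as it plays back or receives recognized speech, which is the synchronization problem the framework addresses.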
