Abstract

This paper describes the parameterization of mouth images for an image-based facial animation system. The analysis part of the system produces a face model composed of a personalized mask and a large database of mouth images with their associated phonetic and visual information. A photo-realistic talking head is then synthesized by rendering the personalized mask textured with a mouth image selected from the database. The selection is driven by a unit selection algorithm, which finds mouth images in the database that match the words spoken by the talking head. Because this selection operates on parameters describing the mouth images, their parameterization is the key to creating a photo-realistic facial animation. Here, the visual parameterization of mouth images by Locally Linear Embedding (LLE) is investigated comprehensively and compared with Principal Component Analysis (PCA). Experimental results show that LLE-based parameterization performs better than PCA-based parameterization for an image-based facial animation system.
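To make the two parameterizations concrete, the following is a minimal sketch of PCA and LLE dimensionality reduction in plain NumPy. It is not the authors' implementation: in the paper's setting each row of `X` would be a flattened mouth-image pixel vector, whereas here the function names, the regularization constant, and the neighborhood size `k` are illustrative assumptions.

```python
import numpy as np

def pca_embed(X, d):
    """Project the rows of X onto their top-d principal components."""
    Xc = X - X.mean(axis=0)                  # center the data
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:d].T                     # (n, d) linear embedding

def lle_embed(X, d, k):
    """Minimal Locally Linear Embedding (after Roweis & Saul).

    1. Find the k nearest neighbours of each point.
    2. Solve for weights reconstructing each point from its neighbours.
    3. Embed by the bottom eigenvectors of (I - W)^T (I - W).
    """
    n = X.shape[0]
    # Pairwise distances; column 0 of the argsort is the point itself.
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)
    nbrs = np.argsort(D, axis=1)[:, 1:k + 1]

    W = np.zeros((n, n))
    for i in range(n):
        Z = X[nbrs[i]] - X[i]                # neighbours centred on x_i
        C = Z @ Z.T                          # local covariance
        C += np.eye(k) * 1e-3 * np.trace(C)  # regularize (illustrative constant)
        w = np.linalg.solve(C, np.ones(k))
        W[i, nbrs[i]] = w / w.sum()          # weights sum to 1 per point

    M = (np.eye(n) - W).T @ (np.eye(n) - W)
    vals, vecs = np.linalg.eigh(M)           # eigenvalues in ascending order
    # Skip the constant bottom eigenvector; take the next d as coordinates.
    return vecs[:, 1:d + 1]
```

The contrast the abstract draws is visible in the code: PCA is a single global linear projection, while LLE preserves local neighborhood geometry, which is why it can capture the nonlinear variation of mouth shapes that a linear subspace misses.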
