Abstract

Mouth images are difficult to synthesize because they vary greatly with illumination, the size and shape of the mouth opening, and especially the visibility of the teeth and tongue. Conventional approaches, such as manipulating a 3D model or warping images, do not produce very realistic animation. To overcome these difficulties, we describe a method of producing large variations in mouth shape and gray-level appearance using a compact parametric appearance model that represents both shape and gray-level appearance. We find a high correlation between shape model parameters and gray-level model parameters, and design a shape appearance dependence mapping (SADM) strategy that converts one to the other. Once mouth shape parameters are derived from speech analysis, a proper full mouth appearance can be reconstructed with SADM. Our experiments show synthetic results for representative mouth appearances that are very close to real mouth images. The proposed technique can be integrated into a speech-driven face animation system. In effect, SADM can synthesize not only the mouth image but also various kinds of dynamic facial texture, such as furrows, dimples, and cheekbone shadows.
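The core idea of the dependence mapping can be sketched as follows. This is a minimal illustration, not the paper's implementation: it assumes the shape/gray-level dependence is modeled as a linear mapping between the two sets of parameters, fitted by least squares on synthetic data; the dimensions, noise level, and data are all hypothetical.

```python
import numpy as np

# Hypothetical sketch of a shape-to-appearance dependence mapping:
# learn a linear map from shape-model parameters to gray-level
# (texture) model parameters. In the real system, shape parameters
# come from speech analysis and both parameter sets come from
# PCA-based appearance models of mouth images; here the data is synthetic.

rng = np.random.default_rng(0)

# Training set: n examples of shape parameters (k_s dims) paired with
# corresponding gray-level parameters (k_g dims).
n, k_s, k_g = 200, 6, 10
shape_params = rng.normal(size=(n, k_s))

# Assume gray-level parameters correlate linearly with shape parameters
# (plus a little noise), reflecting the observed dependence.
true_map = rng.normal(size=(k_s, k_g))
gray_params = shape_params @ true_map + 0.01 * rng.normal(size=(n, k_g))

# Fit the dependence mapping W by least squares: gray ≈ shape @ W.
W, *_ = np.linalg.lstsq(shape_params, gray_params, rcond=None)

# At synthesis time, shape parameters (e.g. derived from speech) are
# converted into gray-level parameters, from which the appearance model
# would reconstruct a full mouth image.
new_shape = rng.normal(size=(1, k_s))
predicted_gray = new_shape @ W
print(predicted_gray.shape)  # (1, 10)
```

The same least-squares mapping extends naturally to other dynamic facial textures (furrows, dimples, shadows) by pairing the relevant shape parameters with their gray-level parameters during training.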
