Abstract

Facial expression editing plays a fundamental role in facial expression generation and has been widely applied in modern film production and computer games. While existing 2-D caricature facial expression editing methods are mostly realized by expression interpolation from the original image to the target image, expression extrapolation has rarely been studied. In this article, we propose a novel expression extrapolation method for caricature facial expressions based on the Kendall shape space. The key idea is to represent the 3-D expression model in the Kendall shape space, which removes rigid transformations such as translation, scaling, and rotation. Built upon this representation, the 2-D caricature expression extrapolation process is controlled by the 3-D model reconstructed from the input 2-D caricature image; exaggerated caricature expressions are then generated from the extrapolated expression of this 3-D model, which is robust to facial poses in the Kendall shape space and can be computed with tools such as the exponential map in Riemannian space. The experimental results demonstrate that our method can effectively and automatically extrapolate facial expressions in caricatures with high consistency and fidelity. In addition, we derive 3-D facial models with diverse expressions and expand the scale of the original FaceWarehouse database. Furthermore, compared with deep learning methods, our approach is based on standard face datasets and avoids the construction of complicated 3-D caricature training sets.
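To make the geometric machinery concrete, the following is a minimal sketch (not the authors' implementation) of the two operations the abstract relies on: mapping a 3-D landmark configuration to the Kendall pre-shape sphere by removing translation and scale (rotation is then removed by Procrustes alignment), and extrapolating an expression past the observed one by following the geodesic from the neutral shape through the expressive shape with the spherical log/exponential maps. All function names and the landmark format (a k x 3 matrix) are illustrative assumptions.

```python
import numpy as np

def to_preshape(X):
    """Map a k x 3 landmark matrix to the Kendall pre-shape sphere:
    centering removes translation, unit-norm scaling removes scale.
    (Illustrative helper; rotation is removed separately below.)"""
    Xc = X - X.mean(axis=0)           # remove translation
    return Xc / np.linalg.norm(Xc)    # remove scale: unit Frobenius norm

def align_rotation(A, B):
    """Rotate pre-shape B onto A via orthogonal Procrustes,
    removing the remaining rotation component."""
    U, _, Vt = np.linalg.svd(B.T @ A)
    return B @ (U @ Vt)

def extrapolate(neutral, expr, t):
    """Extrapolate the expression along the geodesic from `neutral`
    through `expr` on the pre-shape sphere (t > 1 exaggerates).
    Uses the sphere's log map at the neutral shape, scales the
    tangent vector by t, and maps back with the exponential map."""
    A = to_preshape(neutral)
    B = align_rotation(A, to_preshape(expr))
    a, b = A.ravel(), B.ravel()
    cos_theta = np.clip(a @ b, -1.0, 1.0)
    theta = np.arccos(cos_theta)
    if theta < 1e-12:                 # shapes coincide: nothing to extrapolate
        return A
    # log map of b at a: tangent vector of length theta
    v = b - cos_theta * a
    v = theta * v / np.linalg.norm(v)
    # exponential map of the scaled tangent vector t * v at a
    w = t * v
    nw = np.linalg.norm(w)
    c = np.cos(nw) * a + np.sin(nw) * (w / nw)
    return c.reshape(A.shape)
```

With `t = 1` the geodesic recovers the aligned expressive shape exactly, and the result always stays on the pre-shape sphere, so rigid-motion invariance is preserved throughout the extrapolation.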
