Abstract

We propose a novel algorithm for high-resolution modeling of dynamic 3D facial expressions from a sequence of unstructured face point clouds captured at video rate. The algorithm reconstructs not only the global facial deformations caused by muscular movements, but also the expressional details generated by local skin deformations. Our algorithm consists of two parts: extraction of expressional details and reconstruction of expressions. In the extraction part, we extract subtle expressional details such as wrinkles and folds from each point cloud with a Laplacian smoothing operator. In the reconstruction part, we use a multi-scale deformable mesh model to match each point cloud and reconstruct the time-varying expressions. In each matching step, we first use the low-scale mesh to match the global deformations of the point cloud obtained after filtering out the expressional details, and then use the high-scale mesh to match the extracted expressional details. Compared to many existing non-rigid ICP-based algorithms that match the mesh model directly to the entire point cloud, our algorithm avoids the large errors that can occur where local sharp deformations are matched, since it extracts the expressional details for separate matching; it can therefore produce a high-resolution dynamic model reflecting time-varying expressions. Additionally, the use of a multi-scale mesh model makes our algorithm fast, because it reduces the number of iterative optimizations required in matching. Experiments demonstrate the efficiency of our algorithm.
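To illustrate the detail-extraction idea described above, the following is a minimal sketch (not the authors' implementation) of separating expressional details from the global deformation via Laplacian smoothing: the smoothed surface carries the low-frequency, muscle-driven shape, and the residual displacements carry the wrinkles and folds. The mesh connectivity, iteration count, and step size `lam` are illustrative assumptions.

```python
import numpy as np

def laplacian_smooth(vertices, neighbors, lam=0.5, iterations=10):
    """Uniform Laplacian smoothing: repeatedly move each vertex toward
    the centroid of its neighbors (assumed parameters, for illustration)."""
    smoothed = vertices.copy()
    for _ in range(iterations):
        centroids = np.array([smoothed[idx].mean(axis=0) for idx in neighbors])
        smoothed += lam * (centroids - smoothed)
    return smoothed

def split_details(vertices, neighbors):
    """Return (base, detail): the smoothed base surface with the global
    deformation, and the residual displacements (wrinkles, folds)."""
    base = laplacian_smooth(vertices, neighbors)
    detail = vertices - base
    return base, detail

# Toy usage: a 4-vertex patch with a small bump on vertex 0.
verts = np.array([[0.0, 0.0, 0.2], [1.0, 0.0, 0.0],
                  [0.0, 1.0, 0.0], [1.0, 1.0, 0.0]])
nbrs = [[1, 2], [0, 3], [0, 3], [1, 2]]
base, detail = split_details(verts, nbrs)
print(detail[0])  # the bump appears as a high-frequency detail on vertex 0
```

In the two-stage matching described in the abstract, the low-scale mesh would be fit to `base` and the high-scale mesh to the extracted `detail` layer.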
