Abstract
Within the past few years, there has been great interest in face modeling for analysis (e.g. facial expression recognition) and synthesis (e.g. virtual avatars). There are two primary approaches: appearance models (AM) and structure from motion (SFM). Both have been extensively studied, and both have limitations. We introduce a semi-automatic method for 3D facial appearance modeling from video that addresses these problems. Four main novelties are proposed: (1) a 3D generative facial appearance model integrates both structure and appearance, (2) the model is learned in a semi-unsupervised manner from video sequences, greatly reducing the need for tedious manual pre-processing, (3) a constrained flow-based stochastic sampling technique improves specificity in the learning process, and (4) in the appearance learning step, we automatically select the most representative images from the sequence. By doing so, we avoid biasing the linear model, speed up processing and enable more tractable computations. Preliminary experiments on learning 3D facial appearance models from video are reported.
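One way to realize point (4), selecting the most representative frames from a sequence, is to cluster the frames and keep one exemplar per cluster. The sketch below uses k-means on flattened pixel intensities; this clustering criterion and the function name are illustrative assumptions, not the paper's actual selection measure.

```python
import numpy as np

def select_representative_frames(frames, k=10, iters=20, seed=0):
    """Pick up to k representative frames by k-means clustering on
    flattened intensities (an illustrative criterion only).

    frames: array of shape (n_frames, n_pixels)
    Returns sorted indices of the frames nearest each cluster centroid.
    """
    rng = np.random.default_rng(seed)
    n = frames.shape[0]
    # Initialize centroids from k distinct frames.
    centroids = frames[rng.choice(n, size=k, replace=False)].astype(float)
    for _ in range(iters):
        # Assign each frame to its nearest centroid.
        d = np.linalg.norm(frames[:, None, :] - centroids[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # Recompute each centroid as the mean of its members.
        for j in range(k):
            members = frames[labels == j]
            if len(members):
                centroids[j] = members.mean(axis=0)
    # For each centroid, keep the index of the closest actual frame.
    d = np.linalg.norm(frames[:, None, :] - centroids[None, :, :], axis=2)
    return np.unique(d.argmin(axis=0))
```

Keeping only exemplar frames avoids over-representing long runs of near-identical images, which would otherwise bias the linear model toward the dominant expression in the sequence.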
Highlights
Within the past few years, there has been great interest in face modeling for analysis and synthesis
This problem is acute because the face undergoes large changes in appearance due to variations in expression, both qualitative and quantitative, which can seriously bias any parameter estimation
We propose a generative model that is robust to intensity changes in appearance, takes into account structure and appearance, and learns model parameters in a semi-supervised manner
Summary
Within the past few years, there has been great interest in face modeling for analysis (e.g. facial expression recognition) and synthesis (e.g. virtual avatars). Among the various approaches to modeling 3D faces from video, two of the most popular and commonly used are based on appearance models (AM) [2, 4, 8, 9, 17] and rigid/non-rigid structure from motion (SFM) [5]. While the AM approach overcomes the problem of appearance change by explicitly introducing linear variation of intensity and shape, it incurs other challenges: AM approaches do not necessarily decouple rigid from non-rigid motion in the fitting process, since a single shape basis models both, and they require a labeled training set to learn face appearance. We propose a generative model that is robust to intensity changes in appearance, takes both structure and appearance into account, and learns its parameters in a semi-supervised manner.
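The "linear variation of intensity" that AM approaches introduce is conventionally built by PCA over shape-normalized face textures: each image is approximated as a mean texture plus a linear combination of eigen-textures. The sketch below shows this standard construction under assumed inputs; it is not the paper's exact formulation, and the function names are hypothetical.

```python
import numpy as np

def learn_linear_appearance_model(textures, var_kept=0.95):
    """Learn a linear intensity basis by PCA.

    textures: (n_samples, n_pixels) shape-normalized face images.
    Returns (mean, basis), with basis rows as principal components
    retaining at least var_kept of the training variance.
    """
    mean = textures.mean(axis=0)
    X = textures - mean
    # SVD yields principal components without forming the covariance.
    U, S, Vt = np.linalg.svd(X, full_matrices=False)
    var = (S ** 2) / max(X.shape[0] - 1, 1)
    ratio = np.cumsum(var) / var.sum()
    k = int(np.searchsorted(ratio, var_kept)) + 1
    return mean, Vt[:k]

def reconstruct(mean, basis, params):
    """Synthesize a texture from appearance parameters."""
    return mean + params @ basis
```

Because a single linear basis of this kind models all variation at once, rigid head motion and non-rigid expression change are entangled in the parameters, which is exactly the decoupling difficulty noted above.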