Abstract
This paper proposes a method that analyzes an expression from a facial image using a three-dimensional facial model and then extracts the facial expression. First, head motion and facial actions (such as movements of the eyebrows, eyes, and lips) are separated from the facial image. This is achieved by estimating the three-dimensional motion of the face based on the three-dimensional facial model and compensating for that motion. Next, expression information is extracted from the separated facial actions in two ways. One method extracts the facial expressions successively, taking into account the characteristics of the facial actions driven by the facial muscles. The other estimates the facial expression as a whole by the least-squares method, treating the muscle-based facial actions as a vector. These methods are combined with expression synthesis rules, which makes it possible to reconstruct the original expression from the extracted facial expression parameters. Finally, the results of analyzing facial expressions from actual images are compared with evaluations by a psychologist to demonstrate the usefulness of the proposed method. The image reconstructed from the analysis results is also compared with the original image.
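To make the least-squares step concrete, the sketch below shows one plausible formulation, not the authors' actual implementation: a measured facial-action vector is approximated as a linear combination of hypothetical per-expression basis vectors, and the combination weights are taken as the expression parameters. The basis matrix, the action vector, and the function name are all illustrative assumptions.

```python
# Minimal sketch (assumed formulation, not the paper's code): estimate
# expression intensities from a muscle-based facial-action vector by
# least squares, i.e. solve  min_w ||basis @ w - actions||^2.
import numpy as np


def estimate_expression_weights(basis: np.ndarray, actions: np.ndarray) -> np.ndarray:
    """Each column of `basis` is a hypothetical unit facial-action vector for
    one basic expression; `actions` is the facial-action vector measured after
    motion compensation. Returns the least-squares expression weights."""
    weights, *_ = np.linalg.lstsq(basis, actions, rcond=None)
    return weights


# Toy example with made-up numbers: 4 facial-action components, 2 expressions.
basis = np.array([[1.0, 0.0],
                  [0.5, 0.2],
                  [0.0, 1.0],
                  [0.1, 0.8]])
actions = np.array([0.9, 0.55, 0.4, 0.35])
print(estimate_expression_weights(basis, actions))
```

The recovered weights could then be fed to expression synthesis rules to reconstruct the facial image, as the abstract describes.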