Abstract

Automatic emotion recognition from facial expressions is one of the most intensively researched topics in affective computing and human-computer interaction. However, most existing approaches lack 3-D features and dynamic analysis, which limits their usefulness for natural interaction. In this paper, we present an automatic emotion recognition approach for video sequences based on a fiducial-point-controlled 3-D facial model. The facial region is first detected, with local normalization, in the input frames. Twenty-six fiducial points are then located on the facial region and tracked through the video sequence by multiple particle filters. The displacements of the fiducial points serve as landmark control points to synthesize the input emotional expressions on a generic mesh model. As a physics-based transformation, elastic body spline technology is applied to the facial mesh to generate a smooth warp that reflects the control-point correspondences; this also extracts a deformation feature from the realistic emotional expressions. Discriminative Isomap-based classification embeds the deformation feature into a low-dimensional manifold spanning an expression space with one neutral and six emotion class centers. The final decision is made by finding the nearest class center in that feature space.
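The final decision rule described above (assigning the label of the nearest class center in the embedded expression space) can be sketched as follows. This is an illustrative sketch only, not the authors' implementation: the 2-D embedding, the `nearest_class_center` helper, and the example center coordinates are all hypothetical stand-ins for the paper's Isomap-embedded deformation features.

```python
import numpy as np

def nearest_class_center(embedded_feature, class_centers):
    """Assign an expression label by the nearest class center.

    embedded_feature: low-dimensional feature vector (e.g. after an
        Isomap-style embedding of the deformation feature).
    class_centers: dict mapping label -> center vector
        (one neutral plus six emotion class centers).
    """
    distances = {label: np.linalg.norm(embedded_feature - center)
                 for label, center in class_centers.items()}
    return min(distances, key=distances.get)

# Hypothetical 2-D class centers, purely for illustration.
centers = {
    "neutral":   np.array([0.0, 0.0]),
    "happiness": np.array([1.0, 0.5]),
    "sadness":   np.array([-1.0, 0.5]),
    "anger":     np.array([0.0, 1.0]),
    "fear":      np.array([0.5, -1.0]),
    "surprise":  np.array([-0.5, -1.0]),
    "disgust":   np.array([1.0, -0.5]),
}
print(nearest_class_center(np.array([0.9, 0.4]), centers))  # → happiness
```

In practice the centers would be estimated from labeled training sequences in the embedded space, and the feature dimensionality would be whatever the manifold embedding produces rather than 2.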
