Abstract

We present a novel facial expression recognition framework based on audio-visual information analysis. We propose to model the correlation between the two modalities while still allowing them to be treated as asynchronous streams. We also show that, by incorporating auditory information, our framework can improve recognition performance while significantly reducing computational cost, since redundant or insignificant frames need not be processed. In particular, we construct a single representative image for each image sequence as a weighted sum of registered face images, where the weights are derived from auditory features. We then apply a still-image-based technique to the expression recognition task; the framework, however, can be generalized to work with dynamic features as well. We performed experiments on the eNTERFACE'05 audio-visual emotion database, which contains six archetypal emotion classes: Happy, Sad, Surprise, Fear, Anger, and Disgust. We report one-versus-one binary classification as well as multi-class classification performance, evaluated using both subject-dependent and subject-independent strategies. Furthermore, we compare our multi-class classification accuracies with those of previously published work using the same database. Our analyses show promising results.
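To make the weighted-sum construction concrete, the following is a minimal sketch (our own illustration, not the authors' released code) of how a single representative face image could be computed from a registered frame sequence. The per-frame weights here come from a hypothetical audio saliency measure (short-time energy); the actual auditory features used in the paper may differ.

```python
import numpy as np

def audio_weights(audio, n_frames):
    """Derive per-video-frame weights from a hypothetical audio saliency
    measure (short-time energy of the aligned waveform), normalized to sum to 1."""
    chunks = np.array_split(audio.astype(np.float64), n_frames)  # one chunk per video frame
    energy = np.array([np.mean(c ** 2) for c in chunks])
    if energy.sum() == 0.0:
        return np.full(n_frames, 1.0 / n_frames)  # silent clip: fall back to uniform weights
    return energy / energy.sum()

def representative_image(frames, weights):
    """Weighted sum of registered (aligned, same-size) face images.
    `frames` has shape (n_frames, H, W); `weights` sums to 1."""
    frames = np.asarray(frames, dtype=np.float64)
    return np.tensordot(weights, frames, axes=1)  # result has shape (H, W)

# Example with synthetic data: 30 registered 64x64 face frames and 1 s of 16 kHz audio.
rng = np.random.default_rng(0)
frames = rng.random((30, 64, 64))
audio = rng.standard_normal(16000)
w = audio_weights(audio, n_frames=30)
img = representative_image(frames, w)  # single still image fed to the expression recognizer
```

The design intent, as described in the abstract, is that frames coinciding with salient audio contribute more to the fused image, so a standard still-image expression classifier can then be applied to `img` without processing every frame.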
