Abstract

One of the challenging issues in affective computing is giving a machine the ability to recognize the mood of a person. Efforts in this direction have mainly focused on facial and oral cues. Gestures have recently been considered as well, but with less success. Our aim is to fill this gap by identifying and measuring the saliency of posture features that play a role in affective expression. As a case study, we collected affective gestures from human subjects using a motion capture system. We first described these gestures with spatial features, as suggested in studies on dance. Through standard statistical techniques, we verified that there was a statistically significant correlation between the emotion intended by the acting subjects and the emotion perceived by the observers. We used Discriminant Analysis to build affective posture predictive models and to measure the saliency of the proposed set of posture features in discriminating between four basic emotional states: angry, fear, happy, and sad. An information-theoretic characterization of the models shows that the set of features discriminates well between emotions, and also that the models outperform the human observers. Copyright © 2004 John Wiley & Sons, Ltd.
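To illustrate the kind of pipeline the abstract describes, here is a minimal sketch (not the authors' code) of using Linear Discriminant Analysis to discriminate between the four emotion labels from a set of posture features. The feature matrix and labels below are hypothetical placeholders standing in for the motion-capture-derived spatial descriptors; the scikit-learn API is assumed as a convenient modern implementation of Discriminant Analysis.

```python
# Minimal sketch, assuming hypothetical posture data: classify four
# emotion labels from spatial posture features with Discriminant Analysis.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Hypothetical stand-in for motion-capture posture descriptors
# (e.g., joint positions, limb extension): 80 samples x 10 features.
X = rng.normal(size=(80, 10))
# The four basic emotional states used in the paper.
y = rng.choice(["angry", "fear", "happy", "sad"], size=80)

lda = LinearDiscriminantAnalysis()
# Cross-validated accuracy estimates how well the feature set
# discriminates between the four emotional states.
scores = cross_val_score(lda, X, y, cv=5)
print(f"mean CV accuracy: {scores.mean():.2f}")
```

With real data, the fitted model's coefficients (`lda.coef_`) would indicate which posture features carry the most discriminative weight, which is one way to measure feature saliency as the abstract discusses.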
