Abstract
In the normal course of human interaction, people typically exchange more than spoken words: emotion is conveyed simultaneously through nonverbal messages. In this paper, we present a new perceptual model of mood detection designed to enhance a robot's social skills. The model assumes 1) there are only two hidden states (positive or negative mood), and 2) these states can be recognized from certain facial and bodily expressions. The Viterbi algorithm is adopted to predict the hidden state from its visible physical manifestations. We verified the model by comparing its estimates with judgments produced by human observers. The comparison shows that our model performs as well as human observers, so it could be used to enhance a robot's social skills, endowing it with the flexibility to interact in a more human-oriented way.
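The decoding step described above can be sketched as follows. This is a minimal, illustrative two-state hidden Markov model decoded with the Viterbi algorithm; the observation categories and all transition, emission, and start probabilities below are placeholder assumptions for illustration, not parameters from the paper.

```python
import numpy as np

# Hypothetical two-state mood HMM: hidden states are decoded from a
# sequence of observed expression categories via the Viterbi algorithm.
# All probabilities here are illustrative placeholders.
STATES = ["positive", "negative"]
OBS = {"smile": 0, "frown": 1, "neutral": 2}

# Log-space parameters to avoid numerical underflow on long sequences.
start = np.log([0.5, 0.5])
trans = np.log([[0.8, 0.2],       # P(next state | positive)
                [0.3, 0.7]])      # P(next state | negative)
emit = np.log([[0.6, 0.1, 0.3],   # P(observation | positive)
               [0.1, 0.6, 0.3]])  # P(observation | negative)

def viterbi(observations):
    """Return the most likely hidden mood sequence for the observations."""
    obs_idx = [OBS[o] for o in observations]
    n = len(obs_idx)
    score = np.zeros((n, 2))             # best log-probability per state
    back = np.zeros((n, 2), dtype=int)   # backpointers for the best path
    score[0] = start + emit[:, obs_idx[0]]
    for t in range(1, n):
        for s in range(2):
            cand = score[t - 1] + trans[:, s]
            back[t, s] = np.argmax(cand)
            score[t, s] = cand[back[t, s]] + emit[s, obs_idx[t]]
    # Backtrace from the best final state.
    path = [int(np.argmax(score[-1]))]
    for t in range(n - 1, 0, -1):
        path.append(back[t, path[-1]])
    return [STATES[s] for s in reversed(path)]

print(viterbi(["smile", "smile", "frown", "frown", "neutral"]))
# → ['positive', 'positive', 'negative', 'negative', 'negative']
```

Under these assumed parameters, the self-transition probabilities favor staying in the current mood, so a single ambiguous observation (such as "neutral") does not flip the decoded state.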