Abstract

In the normal course of human interaction, people typically exchange more than spoken words: emotion is conveyed at the same time in the form of nonverbal messages. In this paper, we present a new perceptual model of mood detection designed to enhance a robot's social skills. The model assumes that 1) there are only two hidden states (positive or negative mood), and 2) these states can be recognized from certain facial and bodily expressions. The Viterbi algorithm is adopted to predict the hidden state from its visible physical manifestations. We verified the model by comparing its estimates with those produced by human observers. The comparison shows that our model performs as well as human observers, so it could be used to enhance a robot's social skills, endowing it with the flexibility to interact in a more human-oriented way.
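The two-state decoding described above can be sketched as a standard Viterbi pass over a hidden Markov model. The state names match the abstract, but the observation labels and all probabilities below are illustrative placeholders, not values estimated in the paper:

```python
# Minimal Viterbi decoder for a two-state HMM (positive/negative mood).
# Transition and emission probabilities are hypothetical, for illustration only.

STATES = ["positive", "negative"]

start_p = {"positive": 0.5, "negative": 0.5}
trans_p = {
    "positive": {"positive": 0.8, "negative": 0.2},
    "negative": {"positive": 0.3, "negative": 0.7},
}
# Hypothetical expression labels standing in for facial/bodily cues.
emit_p = {
    "positive": {"smile": 0.6, "frown": 0.1, "neutral": 0.3},
    "negative": {"smile": 0.1, "frown": 0.6, "neutral": 0.3},
}

def viterbi(observations):
    """Return the most likely hidden-state sequence for the observations."""
    # V[t][s]: probability of the best path ending in state s at time t.
    V = [{s: start_p[s] * emit_p[s][observations[0]] for s in STATES}]
    path = {s: [s] for s in STATES}
    for obs in observations[1:]:
        V.append({})
        new_path = {}
        for s in STATES:
            prob, prev = max(
                (V[-2][p] * trans_p[p][s] * emit_p[s][obs], p) for p in STATES
            )
            V[-1][s] = prob
            new_path[s] = path[prev] + [s]
        path = new_path
    best = max(STATES, key=lambda s: V[-1][s])
    return path[best]

print(viterbi(["smile", "neutral", "frown", "frown"]))
# → ['positive', 'positive', 'negative', 'negative']
```

With these placeholder probabilities, a run of frowns pulls the decoded mood from positive to negative, which is the qualitative behavior the model relies on.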
