Abstract

Recent research in psychology argues for the importance of “context” in emotion perception. According to these studies, facial expressions do not carry discrete emotional meanings; rather, their meaning depends on the social situation in which the expressions are used. These results imply that emotion expressivity depends on an appropriate combination of context and expression, not on the distinctiveness of the expressions themselves. It follows that relying on facial expressions may not be essential: when appropriate pairs of context and expression are applied, emotional internal states may still be perceived. This paper first discusses how facial expressions limit a robot’s head design and can be costly in hardware. It then proposes expressing context-based emotions as an alternative to facial expressions, and introduces a mechanical structure for producing a specific non-facial contextual expression. The expression originates in Japanese animation, and the mechanism was implemented on a real desktop-size humanoid robot. Finally, an experiment was conducted under a sound-context condition to test whether the contextual expression can link the humanoid’s motions to its emotional internal states. Although the results are limited in their cultural scope, this paper presents possibilities for a future robotic interface for emotion-expressive, interactive humanoid robots.
