Abstract

Facial expression is one of the major cues for emotional communication between humans and robots. In this paper, we present emotional human-robot interaction techniques using facial expressions combined with other useful cues, such as face pose and hand gesture. For the efficient recognition of facial expressions, it is important to know the positions of facial feature points. To do this, our technique estimates the 3D position of each feature point by constructing a 3D face model fitted to the user. To construct the 3D face model, we first build an Active Appearance Model (AAM) that covers variations in facial expression. Next, we estimate depth information at each feature point from frontal- and side-view images. By combining the estimated depth information with the AAM, the 3D face model is fitted to the user, accounting for the 3D transformations of each feature point. Self-occlusions caused by 3D pose variation are handled by a region weighting function applied to the normalized face at each frame. The recognized facial expressions (happiness, sadness, fear and anger) are used to change the colours of foreground and background objects on the robot display, as well as to trigger other robot responses. In our experiments, the proposed method produced desirable results when users viewed comics with entertainment robots.
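As a rough, self-contained illustration of two steps from this pipeline (lifting the 2D AAM feature points to 3D with per-point depth estimates, and mapping a recognized expression to display colours), the sketch below uses NumPy; the helper names, toy values and colour palette are assumptions introduced for illustration and are not taken from the paper.

    # Illustrative sketch only: function names and the colour palette are
    # assumptions, not the authors' implementation.
    import numpy as np

    # Assumed example palette: expression -> (foreground, background).
    # The paper only states that recognized expressions change the colours
    # of foreground and background objects on the robot display.
    EXPRESSION_COLOURS = {
        "happiness": ("yellow", "white"),
        "sadness":   ("blue",   "grey"),
        "fear":      ("purple", "black"),
        "anger":     ("red",    "black"),
    }

    def lift_to_3d(points_2d, depths):
        """Attach per-point depth estimates (obtained beforehand from frontal-
        and side-view images) to the 2D feature points located by the AAM fit,
        giving 3D feature points of shape (n_points, 3)."""
        points_2d = np.asarray(points_2d, dtype=float)   # (n_points, 2)
        depths = np.asarray(depths, dtype=float)         # (n_points,)
        return np.column_stack([points_2d, depths])

    def display_colours_for(expression):
        """Foreground/background colours for a recognized expression, with a
        neutral fallback for labels outside the four recognized classes."""
        return EXPRESSION_COLOURS.get(expression, ("black", "white"))

    if __name__ == "__main__":
        pts_2d = [[120.0, 95.0], [180.0, 96.0], [150.0, 140.0]]  # toy landmarks
        depths = [30.0, 31.0, 55.0]                              # toy depths
        print(lift_to_3d(pts_2d, depths))
        print(display_colours_for("happiness"))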

Highlights

  • Various types of robots, such as intelligent service robots and entertainment robots, are in various stages of development

  • We describe a new method for the recognition of facial expressions using an estimation of 3D feature points based on the Active Appearance Model (AAM)

  • To evaluate the performance of the proposed method, we first evaluated the fitting result of the proposed 3D AAM, because the estimated positions of the facial feature points directly affect the accuracy of expression recognition (a minimal fitting-error check is sketched below)
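The abstract does not specify which error measure the authors used to score the fitting result, so the mean point-to-point Euclidean distance in the following sketch is an assumed, standard stand-in; the landmark coordinates are hypothetical examples.

    # Assumed evaluation metric, not necessarily the one used in the paper.
    import numpy as np

    def mean_point_to_point_error(fitted, ground_truth):
        """Mean Euclidean distance between fitted and manually annotated
        feature points; both arrays have shape (n_points, 2) or (n_points, 3)."""
        fitted = np.asarray(fitted, dtype=float)
        ground_truth = np.asarray(ground_truth, dtype=float)
        assert fitted.shape == ground_truth.shape
        return float(np.linalg.norm(fitted - ground_truth, axis=1).mean())

    if __name__ == "__main__":
        gt = np.array([[10.0, 12.0], [40.0, 11.0], [25.0, 30.0]])    # annotated
        fit = gt + np.array([[1.0, -0.5], [0.5, 0.5], [-1.0, 1.0]])  # fitted
        print(f"mean fitting error: {mean_point_to_point_error(fit, gt):.3f} px")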


Introduction

Various types of robots, such as intelligent service robots and entertainment robots, are in various stages of development. One of the key issues for these robots is human-robot interaction (HRI). For successful HRI, it is desirable for a robot to recognize and respond to the user's facial expressions and pose, as well as their gestures and voice. This applies in particular to entertainment robots: a new type of media machine able to deliver various contents to audiences. Children can read and listen to fairy tales and comics, and sing songs, through such a robot. Most existing HRI methods, however, have been developed for controlling robots rather than for this kind of emotional, content-oriented interaction.

