Abstract

Mutual gaze arises from the interplay of two individuals' gaze behavior. It is an important part of face-to-face social interaction, including verbal exchanges. For humanoid robots to interact more naturally with people, they need internal models that allow them to produce realistic social gaze behavior. The approach taken in this work is to collect data from human conversational pairs and learn a controller for robot gaze directly from that data. In a small initial data collection experiment, mutual gaze between pairs of people is detected and recorded in real time during conversational interaction. A Markov model representation of the human gaze data is produced to demonstrate how such data could be used to create a controller. We also discuss how an algebraic analysis of the state transition structure of such models may reveal interesting properties of human gaze interaction. Results are also presented from a second, larger experiment in which mutual gaze is detected offline from recorded video for greater accuracy, and trends linking gaze and speech behavior in this data set are discussed.
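
The abstract does not give implementation details, but a minimal sketch can illustrate the idea of a Markov-model gaze controller learned from human data. The state names and transition probabilities below are hypothetical placeholders, not values from the study; in practice the probabilities would be estimated (e.g., as normalized transition counts) from the annotated human gaze recordings.

```python
import random

# Hypothetical gaze states; the actual state space would come from the
# annotated human gaze data (e.g., mutual gaze vs. gaze aversion).
STATES = ["look_at_partner", "look_away"]

# transitions[s] maps each successor state to its probability. These
# numbers are illustrative placeholders, not results from the paper.
transitions = {
    "look_at_partner": {"look_at_partner": 0.85, "look_away": 0.15},
    "look_away":       {"look_at_partner": 0.40, "look_away": 0.60},
}

def next_state(current: str) -> str:
    """Sample the next gaze state from the learned transition distribution."""
    successors, probs = zip(*transitions[current].items())
    return random.choices(successors, weights=probs, k=1)[0]

# Drive the robot's gaze by walking the chain, one step per time slice.
state = "look_away"
for _ in range(10):
    state = next_state(state)
    print(state)

# One simple algebraic property of the transition structure is the chain's
# stationary distribution, approximated here by power iteration; it gives
# the long-run fraction of time spent in each gaze state.
dist = {s: 1.0 / len(STATES) for s in STATES}
for _ in range(100):
    dist = {s: sum(dist[p] * transitions[p][s] for p in STATES) for s in STATES}
print(dist)
```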
