Abstract

Listening to one another is essential to human-human interaction. In fact, we humans spend a substantial part of our day listening to other people, in private as well as in work settings. Attentive listening serves to gather information for oneself, but at the same time it signals to the speaker that they are being heard. To deduce whether our interlocutor is listening to us, we rely on reading their nonverbal cues, much as we use nonverbal cues to signal our own attention. Such signaling becomes more complex when we move from dyadic to multi-party interactions. Understanding how humans use nonverbal cues in a multi-party listening context not only increases our understanding of human-human communication but also aids the development of successful human-robot interactions. This paper brings together previous analyses of listener behavior in human-human multi-party interaction and provides novel insights into gaze patterns between the listeners in particular. We investigate whether the gaze patterns and feedback behavior observed in human-human dialogue are also beneficial for the perception of a robot in multi-party human-robot interaction. To answer this question, we implement an attentive listening system that generates multi-modal listening behavior based on our human-human analysis. We compare our system to a baseline system that does not differentiate between listener types in its behavior generation, and evaluate it in terms of the participants' perception of the robot, their behavior, as well as the perception of third-party observers.

Highlights

  • While the idea of robots being an integral part of our society has been around for many years, it has recently become much more immediate

  • There was very strong evidence (p < 0.001, adjusted using the Bonferroni correction) of a difference between the listener categories Side Participant (SPa) and Bystander (Bys), and strong evidence (p < 0.01) of a difference between Attentive Listener (ALi) and SPa, in the amount of gaze they received from the speaker (see the sketch after this list)

  • Results II (Participant's Focus of Attention): Figure 8 shows the time participants spent looking at the robot realizing the behaviors of the attentive listener system vs. the robot realizing the behaviors of the baseline system

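For readers who want to see what such a comparison looks like in practice, the following is a minimal sketch of pairwise tests with a Bonferroni adjustment. The category labels (ALi, SPa, Bys) follow the paper; the example data, the choice of Mann-Whitney U tests, and the variable names are illustrative assumptions rather than the authors' actual analysis pipeline.

    # Sketch: pairwise comparison of gaze received per listener category,
    # Bonferroni-corrected. Data values below are hypothetical placeholders.
    from itertools import combinations
    from scipy.stats import mannwhitneyu

    # Hypothetical per-trial proportions of speaker gaze received by each role.
    gaze_received = {
        "ALi": [0.42, 0.38, 0.45, 0.40, 0.36],   # Attentive Listener
        "SPa": [0.30, 0.28, 0.33, 0.26, 0.31],   # Side Participant
        "Bys": [0.08, 0.05, 0.10, 0.07, 0.06],   # Bystander
    }

    pairs = list(combinations(gaze_received, 2))
    for a, b in pairs:
        stat, p = mannwhitneyu(gaze_received[a], gaze_received[b], alternative="two-sided")
        p_adj = min(p * len(pairs), 1.0)          # Bonferroni: multiply by number of tests
        print(f"{a} vs {b}: U={stat:.1f}, adjusted p={p_adj:.4f}")
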

Summary

Introduction

While the idea of robots being an integral part of our society has been around for many years, it has recently become much more immediate. The following sections provide an overview of multi-party listener modeling in human-robot interaction and summarize relevant findings on audio-visual feedback tokens and eye gaze in human-human interaction. The term participant refers to anyone contributing to and being part of a conversation. This includes the speaker as well as the current addressee, but can also include further people who are part of the group of possible speakers but who are currently taking on a listening role. These participants are classified as side-participants. Both bystanders and overhearers are part of the non-participant group.

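As a rough illustration of the participation framework just described, the sketch below encodes the roles and the participant/non-participant split. The enum and helper names are assumptions introduced here for clarity, not identifiers from the paper's system.

    # Sketch of the participation framework: speaker, addressee and
    # side-participants count as participants; bystanders and overhearers do not.
    from enum import Enum

    class Role(Enum):
        SPEAKER = "speaker"
        ADDRESSEE = "addressee"
        SIDE_PARTICIPANT = "side-participant"
        BYSTANDER = "bystander"
        OVERHEARER = "overhearer"

    PARTICIPANTS = {Role.SPEAKER, Role.ADDRESSEE, Role.SIDE_PARTICIPANT}

    def is_participant(role: Role) -> bool:
        """Return True for conversational participants, False for non-participants."""
        return role in PARTICIPANTS
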
