Abstract

In recent years, we have explored the use of gaze—an important nonverbal communication signal and cue in everyday human-human interaction—as an input for AI systems. Specifically, our work investigated whether an artificial agent, given the ability to observe human gaze, can infer a person's intentions, and how aspects of these inferences can be communicated to a human collaborator. We leveraged a range of human-computer interaction techniques to inform the design of a gaze-enabled artificial agent that can form predictions and communicate them. In this paper, we include a snapshot of how AI and HCI can be brought together to inform the design of an explainable interface for an artificial agent. To conclude, we outline challenges, drawn from our work, that arise when designing AI systems that incorporate nonverbal communication.

Highlights

  • Imagine walking up to a group of peers playing a competitive board game around a table

  • If the AI systems are able to make the same observations as you in the previous scenario, would they make similar inferences? Would these inferences be accurate and timely? Would the systems be able to explain how they arrived at their deductions? What information, and how much of it, should such intelligent systems communicate? Our published and ongoing body of work explores such questions from the perspective of 'gaze awareness': if intelligent systems can observe where humans are looking and understand gaze behaviours in context, could they improve their interactions with their human counterparts?

  • With inferences on the opponent's plans afforded by observing the opponent's gaze, the player can adjust their own strategy according to the predictions if necessary (from "Nonverbal Communication in Human-AI Interaction: Opportunities and Challenges", p. 223)
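The gaze-awareness idea in the highlights above can be illustrated with a minimal sketch. Note that the function, the area-of-interest (AOI) labels, and the dwell threshold below are our own hypothetical illustration, not the authors' actual system: an eye tracker reports fixations over labelled board regions, and the agent guesses the attended target by totalling dwell time per region.

```python
from collections import defaultdict

def predict_intended_target(fixations, min_dwell_ms=300):
    """Hypothetical sketch: guess which board region a player is attending
    to by totalling fixation time per area of interest (AOI).

    fixations: list of (aoi_label, duration_ms) pairs from an eye tracker.
    Returns the AOI with the longest total dwell time, or None if no AOI
    reaches the minimum dwell threshold.
    """
    dwell = defaultdict(int)
    for aoi, duration_ms in fixations:
        dwell[aoi] += duration_ms
    if not dwell:
        return None
    best = max(dwell, key=dwell.get)
    return best if dwell[best] >= min_dwell_ms else None

# Example: the player repeatedly fixates the opponent's king-side pieces.
gaze_log = [("king_side", 180), ("centre", 90), ("king_side", 220)]
print(predict_intended_target(gaze_log))  # → king_side
```

A real system would of course also need to explain such a prediction to the human collaborator (e.g. by surfacing the dwell times behind it), which is the explainable-interface question the paper explores.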



Introduction

Imagine walking up to a group of peers playing a competitive board game around a table (as shown in Scenario 1 below).

Keywords: Human-AI Interaction · Explainable Interfaces · Nonverbal Communication · Multimodal Input · Intention Recognition · Gaze Input

