Abstract

Unpredictability in robot behaviour can cause difficulties in interacting with robots. However, for social interactions with robots, a degree of unpredictability in robot behaviour may be desirable for facilitating engagement and increasing the attribution of mental states to the robot. To generate a better conceptual understanding of predictability, we looked at two facets of predictability, namely, the ability to predict robot actions and the association of predictability as an attribute of the robot. We carried out a video human-robot interaction study in which we manipulated whether participants could see the cause of a robot's responsive action: the cause was either visible, absent, or hidden because we obstructed the visual cues. Our results indicate that when the cause of the robot's responsive actions was not visible, participants rated the robot as more unpredictable and less competent, compared to when it was visible. The relationship between seeing the cause of the responsive actions and the attribution of competence was partially mediated by the attribution of unpredictability to the robot. We argue that the effects of unpredictability may be mitigated when the robot identifies when a person may not be aware of what the robot wants to respond to and uses additional actions to make its response predictable.
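The partial mediation reported above can be illustrated with a minimal, hypothetical sketch. This is not the authors' analysis code: the variable names, effect sizes, and simulated data are assumptions for illustration only, and the decomposition follows a generic Baron-and-Kenny-style regression approach rather than the paper's actual procedure.

```python
# Hypothetical mediation sketch: visibility of the cause -> attributed
# unpredictability (mediator) -> attributed competence (outcome).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 200
# Simulated (assumed) data: cause_visible is a 0/1 condition indicator;
# unpredictability and competence stand in for participant ratings.
cause_visible = rng.integers(0, 2, n)
unpredictability = 4.0 - 1.2 * cause_visible + rng.normal(0, 1, n)
competence = 3.0 + 0.8 * cause_visible - 0.5 * unpredictability + rng.normal(0, 1, n)
df = pd.DataFrame({"cause_visible": cause_visible,
                   "unpredictability": unpredictability,
                   "competence": competence})

# Path c: total effect of visibility on competence.
total = smf.ols("competence ~ cause_visible", df).fit()
# Path a: effect of visibility on the mediator (unpredictability).
path_a = smf.ols("unpredictability ~ cause_visible", df).fit()
# Paths b and c': mediator and visibility predicting competence together.
direct = smf.ols("competence ~ cause_visible + unpredictability", df).fit()

indirect = path_a.params["cause_visible"] * direct.params["unpredictability"]
print(f"total effect  c  = {total.params['cause_visible']:.3f}")
print(f"direct effect c' = {direct.params['cause_visible']:.3f}")
print(f"indirect a*b     = {indirect:.3f}")
```

With these assumed coefficients, the direct effect c' is smaller than the total effect c but remains nonzero, which is the signature of partial mediation described in the abstract.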

Highlights

  • We investigated the relationship between the visibility of the cause of a robot's responsive actions and action predictability, attributed predictability, the social attributes associated with the robot, and the participants' intolerance of uncertainty.

  • The predictability of a robot is often mentioned in HRI as an important quality of the robot [e.g., References 1, 43, 45, 52, 58, 69, 85], yet the current conceptual understanding of robot predictability is inadequate: predictability is multi-faceted rather than a singular concept.

  • This limits our ability to effectively take robot predictability into account in the design of robot behaviour.



Introduction

Depending on the perspective that is taken, various terms are used to describe (the various stages in) how people understand robot behaviour. This process itself is often referred to as explainability or interpretability [64]. Predictions, and the predictability of objects and processes in the environment, are believed to play a central role in how the brain solves the problem of causal inference (predictive processing, see References [16, 38, 39, 47, 56, 77]). This is based on the premise that the brain continually generates predictions about what input will come next, based on current input and learned associations [41, 46]. In the remainder of this section, we provide a conceptualisation of predictability and how it relates to people predicting robot behaviour, based on insights from these cognitive theories.
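As a toy illustration of this premise (not a model from the paper or from the cited predictive-processing literature; the learning rate, input stream, and update rule are assumptions), an agent can maintain a running prediction of incoming input and correct it in proportion to the prediction error, so that predictable input yields shrinking errors while a surprising change yields a large error:

```python
# Toy prediction-error sketch: update a learned expectation toward each
# new observation; errors shrink while input is predictable and spike
# when the input stream shifts unexpectedly.
import numpy as np

rng = np.random.default_rng(1)
prediction = 0.0
learning_rate = 0.2  # assumed value for illustration
inputs = np.concatenate([rng.normal(1.0, 0.1, 30),   # predictable phase
                         rng.normal(4.0, 0.1, 30)])  # surprising shift

for t, observation in enumerate(inputs):
    error = observation - prediction         # prediction error
    prediction += learning_rate * error      # update learned association
    if t in (0, 29, 30, 59):
        print(f"t={t:2d} observation={observation:.2f} "
              f"prediction={prediction:.2f} error={error:+.2f}")
```

In this sketch the error at t=30 (the moment the input shifts) is large even though the agent had learned the earlier regularity, mirroring the idea that unpredicted events stand out against learned expectations.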

