Abstract

As domestic service robots become more prevalent and act more autonomously, conflicts of interest between humans and robots become more likely. In such situations, the robot should be able to negotiate with humans effectively and appropriately in order to fulfill its tasks. One promising approach could be the imitation of human conflict resolution behaviour and the use of persuasive requests. The presented study complements previous work by investigating combinations of assertive and polite request elements (appeal, showing benefit, command), which have been found to be effective in HRI. The conflict resolution strategies each contained two types of requests, the order of which was varied to either mimic or contradict human conflict resolution behaviour. The strategies were also adapted to the users’ compliance behaviour: if the participant complied after the first request, no second request was issued. In a virtual reality experiment (N = 57) with two trials, six different strategies were evaluated regarding user compliance, robot acceptance, trust, and fear, and compared to a control condition featuring no request elements. The experiment featured a human-robot goal conflict scenario concerning household tasks at home. The results show that in trial 1, strategies reflecting human politeness and conflict resolution norms were rated as more acceptable, more polite, and more trustworthy than strategies entailing a command. No differences were found for trial 2. Overall, compliance rates were comparable to those of human-human requests and did not differ between strategies. The contribution is twofold: presenting an experimental paradigm for investigating a human-robot conflict scenario and providing a first step towards developing acceptable robot conflict resolution strategies based on human behaviour.
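
To make the adaptive structure of these strategies concrete, the following minimal Python sketch illustrates the two-request, compliance-adaptive logic described above. The Strategy container, the run_strategy and check_compliance names, and the example phrasings are hypothetical illustrations, not the study's actual implementation.

    # Minimal sketch of the compliance-adaptive, two-request strategy logic (assumed names).
    from dataclasses import dataclass

    @dataclass
    class Strategy:
        first_request: str   # e.g., an appeal, a benefit statement, or a command
        second_request: str  # issued only if the user does not comply with the first

    def run_strategy(robot_say, check_compliance, strategy: Strategy) -> bool:
        """Issue the first request; escalate to the second only on non-compliance."""
        robot_say(strategy.first_request)
        if check_compliance():
            return True                      # user complied; no second request is issued
        robot_say(strategy.second_request)   # otherwise the second request element follows
        return check_compliance()

    # Hypothetical usage: an appeal followed by a command (polite-then-assertive ordering)
    appeal_then_command = Strategy(
        first_request="Could you please let me tidy up this area now?",
        second_request="Please step aside so I can finish my task.",
    )
    complied = run_strategy(print, lambda: input("Comply? (y/n) ") == "y", appeal_then_command)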

Highlights

  • Service robots are still small and limited in their functions but soon will become larger, more versatile, and autonomous [1,2]

  • Significant group differences between conditions were found with contrast testing for trial 1: strategies that fulfilled the human politeness scheme were rated as more acceptable (ANOVA: F(9, 56) = 3.7, p < .001; contrast: t(47) = 3.4, p < .001, Cohen’s d = 0.99), more polite (ANOVA: F(9, 56) = 3.1, p < .01; contrast: t(47) = 1.8, p < .05, Cohen’s d = 0.53), and more trustworthy (ANOVA: F(9, 56) = 2.5, p < .05; contrast: t(16) = 2.3, p < .05, Cohen’s d = 1.15, df corrected for unequal variances) than the com-pol and com-ben strategies (see the analysis sketch after this list)

  • Different combinations of assertive and polite request elements applied by a humanoid service robot were tested for user compliance and acceptance
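
The contrast testing reported above can be illustrated with the following sketch of a one-way ANOVA followed by a planned contrast on the group means. The data are simulated, and the grouping of a norm-conforming strategy against the com-pol and com-ben strategies is an assumption made for illustration; this is not the authors' analysis script.

    # Sketch of a one-way ANOVA plus a planned contrast (simulated ratings, assumed grouping).
    import numpy as np
    from scipy import stats

    def planned_contrast(groups, weights):
        """t-test for a linear contrast of group means using the pooled ANOVA error term."""
        means = np.array([np.mean(g) for g in groups])
        ns = np.array([len(g) for g in groups])
        ss_within = sum(((np.asarray(g) - m) ** 2).sum() for g, m in zip(groups, means))
        df_error = ns.sum() - len(groups)          # no correction for unequal variances here
        mse = ss_within / df_error
        w = np.asarray(weights, dtype=float)
        estimate = (w * means).sum()
        se = np.sqrt(mse * (w ** 2 / ns).sum())
        t = estimate / se
        return t, stats.t.sf(abs(t), df_error), df_error   # one-sided p-value

    # Simulated acceptance ratings for three hypothetical strategy groups
    rng = np.random.default_rng(0)
    groups = [rng.normal(4.2, 1.0, 19),   # norm-conforming strategy
              rng.normal(3.1, 1.0, 19),   # com-pol
              rng.normal(3.0, 1.0, 19)]   # com-ben

    f, p_anova = stats.f_oneway(*groups)
    t, p_contrast, df = planned_contrast(groups, weights=[1.0, -0.5, -0.5])
    print(f"ANOVA F = {f:.2f}, p = {p_anova:.3f}; contrast t({df}) = {t:.2f}, p = {p_contrast:.3f}")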


Introduction

Service robots are still small and limited in their functions, but they will soon become larger, more versatile, and more autonomous [1,2]. This will change their social role from simple, task-performing robots to sociable household members [3,4], which increases the likelihood of human-robot conflicts (e.g., over goals or priorities) [5,6]. In an emergency situation, it could even be dangerous if the robot is programmed to always be submissive (e.g., not raising an alarm so as not to interrupt its owner). Scenarios like these illustrate the importance of robot assertiveness for future HRI and, consequently, for robot interaction design.
