Abstract

The role of intelligent agents becomes more social as they are expected to act in direct interaction, involvement, and/or interdependency with humans and other artificial entities, as in Human-Agent Teams (HAT). The highly interdependent and dynamic nature of teamwork demands correctly calibrated trust among team members. Trust violations are an inevitable aspect of the cycle of trust, and since repairing damaged trust proves more difficult than building trust initially, effective trust repair strategies are needed to ensure durable and successful team performance. The aim of this study was to explore the effectiveness of different trust repair strategies from an intelligent agent by measuring the development of human trust and advice taking in a Human-Agent Teaming task. Data for this study were obtained using a task environment resembling a first-person shooter game. Participants carried out a mission in collaboration with their artificial team member. A trust violation was provoked when the agent failed to detect an approaching enemy. Afterwards, the agent offered one of four trust repair strategies, composed of the apology components explanation and expression of regret (either one alone, both, or neither). Our results indicated that expressing regret was crucial for effective trust repair. After trust declined due to the violation by the agent, trust only recovered significantly when an expression of regret was included in the apology. This effect was stronger when an explanation was added as well. In this context, the intelligent agent was most effective in rebuilding trust when it provided an apology that was both affective and informational. Finally, the implications of our findings for the design and study of Human-Agent trust repair are discussed.

Highlights

  • In a wide variety of domains, such as healthcare, military, transport, and in and around the regular household, autonomous systems are increasingly deployed as teammates rather than tools

  • The results of this study show that apologies including an expression of regret were most effective in repairing trust after a trust violation in a human-agent teaming setting

  • Although expressing regret is typically perceived as a human-like quality, these results suggest that saying sorry makes a difference in rebuilding trust even when it comes from a non-human agent

Introduction

In a wide variety of domains, such as healthcare, military, transport, and in and around the regular household, autonomous systems are increasingly deployed as teammates rather than tools. Technology is no longer viewed merely as a tool to achieve a certain goal; people form unique social relationships with automated and autonomous entities [7, 30, 51]. This requires a more comprehensive set of social skills. Equipping autonomous systems with teaming capabilities is changing the way in which people interact with them.
