Abstract

Autonomy is becoming increasingly integrated into everyday life. For humans to fully realize the benefits of working alongside autonomy, appropriate trust toward autonomous partners will be necessary. However, research is needed to determine how humans respond to autonomous partners when they behave in (un)expected ways. Thus, a series of studies was designed to investigate the effects of an autonomous teammate's framed social intent (Study 1), its unstated behavioral manifestations (Study 2), and the interaction of these variables (Study 3) on participants' trustworthiness perceptions, reliance intentions, and trust behaviors. Participants were asked to imagine themselves partnering with an autonomous teammate in a team-based, gamified collaboration. Key innovations in this task included role-based vulnerability (necessitating clear expectations regarding one's stated social intent) and teammate interdependence. Across studies, the framed social intent and observable behaviors of the autonomous agent were manipulated. Results demonstrated robust effects of these manipulations, with interactions revealing the nuanced impact of (un)met expectations on the criteria. We conclude by offering suggestions for future research and design aimed at enhancing human-autonomy interactions.
