Reading partners’ actions correctly is essential for successful coordination, but interpretation does not always reflect reality. Attribution biases, such as the self-serving and correspondence biases, lead people to misinterpret their partners’ actions and falsely assign blame after a surprise (an unexpected event). These biases also influence people’s trust in their partners, including machine partners (Muir, 1987; Madhavan & Wiegmann, 2004). Advances in robotics have allowed robots to partner with people at work and to be treated socially (Young, Hawkins, Sharlin & Igarashi, 2009). However, these advances may interfere with a person’s appropriate calibration of trust in robots (Parasuraman & Miller, 2004). A better understanding of attribution biases in the wake of an unexpected event may shed light on how trust develops in a robot partner. This study used a human-human coordination task as a reference point for future human-robot interactions. We posited that attribution biases would lead people to blame their partner after experiencing a negative performance outcome, thus lowering their trust in that partner. Sixty participants (30 pairs) were tasked with coordinating with an unfamiliar human partner to lift a 17.5 lb box containing a 200 ml cup of water filled to the brim from the floor to a table, as quickly as possible without spilling water. Before the task, participants were told that the pair with the best performance would be rewarded; however, after the task all pairs were told they had not achieved the best performance. Pairs were randomly assigned to either a surprise condition, in which a 250 Hz warning tone sounded during the task, or a baseline condition with no warning tone. Participants in both conditions were instructed to pause the task as quickly as possible if the warning tone sounded, and they did not know whether or when a tone would occur. To assess participants’ trust in their partner, Muir’s (1987) trust questionnaire was administered twice: once after the task was introduced and again after the coordination task was completed. To capture blame assignment, a scale based on Kim and Hinds (2006) was administered after participants were told they had not achieved the best performance. Results indicate that participants were less likely to blame their partners for the negative outcome than to blame themselves or, in the surprise condition, the warning tone itself. Surprisingly, in the surprise condition, trust in the partner significantly increased after the negative outcome rather than decreasing; no significant change in trust was found in the baseline condition. Finally, results indicate that initial trust in a partner is a significant predictor of how people assign blame. Overall, the expected effects of attribution biases were not observed in the present study. First, friendliness may have influenced blame assignment: although participants were unfamiliar with one another, all were students at the same university. Second, the shared experience of the surprise condition, including the chance to observe a partner’s response to the warning tone, may have catalyzed increased trust in that partner. Although physical differences between participants were not evaluated in this study, height may be a potential confounding factor in this task. These findings inform our understanding of trust in a partner during physical human-robot coordination.