The success of human-AI teams (HATs) requires humans to work with AI teammates in a trusting manner over sustained periods of time. However, how trust evolves dynamically in response to human-AI team interactions remains understudied. This work explores the evolution of trust in HATs over time by analyzing 45 participants' experiences of trust or distrust in an AI teammate before, during, and after collaborating with AI in a three-member HAT. Our findings highlight that humans' expectations of the AI's ability, integrity, benevolence, and adaptability shape their initial trust in the AI before collaboration. This initial trust can then be maintained or revised through the development of situational trust during collaboration, in response to the AI teammate's communication behaviors. Further, the trust developed through collaboration can influence individuals' subsequent expectations of the AI's ability and their future collaborations with AI. Our findings also reveal similarities and differences in the temporal dimensions of trust in AI versus human teammates. We contribute to the CSCW community by offering one of the first empirical investigations into the dynamic, temporal dimension of trust evolution in HATs. Our work yields insights into pathways for expanding the methodological toolkit for investigating trust development in HATs and for formulating theories of trust specific to the HAT context. These insights further inform the effective design of AI teammates and provide guidance on the timing, content, and methods for calibrating trust in future human-AI collaboration contexts.