Abstract

Future Artificial Intelligence (AI) teammates will need to take on more teaming and collaborative responsibilities in human-agent teams to advance those teams' capacities and improve performance. To do so, an AI will require artificial social intelligence (ASI) to effectively anticipate, predict, and respond to humans in ways that account for context, individual cognition, team structures, and the social, interpersonal team space. Theory of Mind is a core socio-cognitive process that underpins these social abilities in humans, and it must be developed for agents as Artificial Theory of Mind (AToM) capable of supporting social behavior. An agent utilizing AToM models would be able to observe and infer human behavior and update its internal models to engage more effectively with human teammates based on the context of the interaction, much as humans do naturally. The research reported here explores the interactions between AI imbued with Artificial Theory of Mind and teams of human participants completing simulated Urban Search and Rescue missions. Our explorations focus on the relationships between the advisory interventions delivered by artificial, socially intelligent agents and the mission outcomes of the teams with which they worked. The gamified Urban Search and Rescue task employed for this research consisted of two missions per team, during which participants searched for, triaged, and evacuated victims of a building collapse. Each three-person team was assigned an ASI agent that interacted with them during both missions. Critically, the agents were not given omniscient knowledge of the task, such as specific locations of task-related objectives, so the advice they delivered to teams was based entirely on their artificial theory of mind rather than rote problem solving. Of primary interest to this work is the nature of the advisory interventions delivered by the agents while assisting with the rescue missions. In this paper, we examine the interventions with attention to their content and delivery, with particular interest in interventions associated with team communication. The results of these analyses suggest that interventions were generally associated with positive outcomes rather than negative ones. Specifically, interventions advising teams to engage in information sharing and externalizing communication tended to relate positively to outcomes. This finding indicates that even early forms of artificial social intelligence have the potential to serve as teammates rather than merely being utilized as tools, and that artificial teammates can improve team performance. Further, the correlations between communication intervention types and mission performance shed light on how artificial social intelligence can help teams engage more effectively in teaming activities, such as communication, which can benefit team performance outcomes. These findings are an important step toward investigating the impact of agents actively engaging in teaming behaviors, demonstrating an agent's potential benefit to teamwork through support of team communication and identifying factors that may have negatively impacted performance and should be avoided to improve team effectiveness.
