Route planning is a critical behavior for human-intelligent agent (H-IA) team mobility. The scientific community has made major advances in improving route planner optimality and speed. However, human factors, such as the ability to predict and understand teammates’ actions and goals, are necessary for trust development in H-IA teams. Trust is especially critical when agents’ behaviors do not match human team members’ expectations, or when the human cannot understand the agent’s underlying reasoning process. To address this issue, the artificial intelligence community has pushed toward creating human-like agent behaviors using machine learning. The problem with this approach is that we do not yet have a clear understanding of what constitutes human-like behavior across the breadth of tasks that H-IA teams undertake. This article describes an investigation and comparison of human and agent route planning behaviors, the interplay between humans and agents in collaborative planning, and the role of trust in this collaborative process. Finally, we propose a data-driven methodology for characterizing and visualizing differences between routes planned by humans and agents. This methodology provides a means to advance compatible mental model metrics and theory by informing targeted transparency manipulations, thereby improving the speed and quality of routes produced by H-IA teams.
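To make the route-comparison idea concrete, below is a minimal, hypothetical sketch, not the article's actual methodology: it assumes routes are represented as sequences of (x, y) waypoints and uses the discrete Fréchet distance as one plausible dissimilarity measure between a human-planned and an agent-planned route. The function name `discrete_frechet` and the example routes are illustrative inventions, not drawn from the article.

```python
# Hypothetical sketch: quantify the difference between two planned routes.
# Assumes routes are waypoint sequences; the discrete Frechet distance is
# one plausible (not the article's stated) dissimilarity measure.
from math import dist
from functools import lru_cache


def discrete_frechet(p: list[tuple[float, float]],
                     q: list[tuple[float, float]]) -> float:
    """Discrete Frechet distance between two waypoint sequences."""

    @lru_cache(maxsize=None)
    def c(i: int, j: int) -> float:
        # Cost of best coupling ending at waypoints p[i] and q[j].
        d = dist(p[i], q[j])
        if i == 0 and j == 0:
            return d
        if i == 0:
            return max(c(0, j - 1), d)
        if j == 0:
            return max(c(i - 1, 0), d)
        return max(min(c(i - 1, j), c(i - 1, j - 1), c(i, j - 1)), d)

    return c(len(p) - 1, len(q) - 1)


# Illustrative example: a human route that detours versus a direct agent route.
human_route = [(0, 0), (1, 2), (2, 2), (3, 1), (4, 0)]
agent_route = [(0, 0), (2, 0), (4, 0)]
print(f"route dissimilarity: {discrete_frechet(human_route, agent_route):.2f}")
```

A scalar like this could feed the kind of characterization and visualization the abstract describes, e.g., plotting dissimilarity across many planning trials to surface where agent routes diverge most from human expectations.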