Abstract
The focus of this research is two-fold: (1) to understand how team interaction in Human-Autonomy Teams (HATs) evolves in the Remotely Piloted Aircraft Systems (RPAS) task context, and (2) to understand how HATs respond to three types of failures (automation, autonomy, and cyber-attack) over time. We summarize the findings from three of our recent experiments on team interaction within HATs over time in the dynamic context of RPAS. For the first and second experiments, we summarize general findings on the interaction of three-member teams over time, comparing HATs with all-human teams. In the third experiment, which extends beyond the first two, we investigate how HATs evolve when faced with three types of failures during the task. Across all three experiments, measures focus on team interactions and temporal dynamics, consistent with the theory of interactive team cognition. We applied Joint Recurrence Quantification Analysis to communication flow in the three experiments. One of the most interesting and significant findings regarding team evolution is entrainment: one team member (the pilot in our study, either agent or human) can change the communication behaviors of the other teammates over time, including coordination, and thereby affect team performance. In the first and second studies, the behavioral passiveness of the synthetic teams resulted in very stable and rigid coordination, in comparison to the less stable all-human teams. Experimenter teams demonstrated metastable coordination (neither rigid nor unstable) and performed better than rigid and unstable teams during the dynamic task. In the third experiment, metastable behavior helped teams overcome all three types of failures.
These findings point to three future needs for ensuring effective HATs: (1) training autonomous agents on the principles of teamwork, specifically understanding the tasks and roles of teammates, (2) human-centered machine learning design of the synthetic agent, so that agents can better understand human behavior and, ultimately, human needs, and (3) training human members to communicate and coordinate with agents, given the current limitations of the agents' Natural Language Processing.
Highlights
Teamwork can be defined as the interaction of two or more heterogeneous and interdependent team members working on a common goal or task (Salas et al., 1992)
In Remotely Piloted Aircraft Systems (RPAS) studies, we examined team communication flow to characterize Human-Autonomy Team (HAT) patterns of interaction and their variation over time using Joint Recurrence Plots (JRPs)
Experimenter teams were more efficient than the control teams
Synthetic and control teams overcame the roadblocks, but performed more poorly than the experimenter teams
Synthetic teams pulled more information than they pushed, and pushing information was not as effective for their performance as it was for the all-human teams
Control and experimenter teams pushed more information than they pulled, and pushing information was effective for their performance
Synthetic teams demonstrated stable coordination dynamics, while experimenter teams were moderately stable and control teams were unstable
Team performance increased across the missions
Target processing efficiency increased across the missions
Target processing ratings increased across the missions
Teams demonstrated better performance in overcoming automation and autonomy failures than malicious cyber-attacks
Summary
Teamwork can be defined as the interaction of two or more heterogeneous and interdependent team members working on a common goal or task (Salas et al., 1992). When team members interact dynamically with each other and with their technological assets to complete a common goal, they act as a dynamical system. Advancements in machine learning in the development of autonomous agents are allowing agents to interact more effectively with humans (Dautenhahn, 2007), to make intelligent decisions, and to adapt to their task context over time (Cox, 2013). Autonomous agents are increasingly considered team members, rather than tools or assets (Fiore and Wiltshire, 2016; McNeese et al., 2018), and this has generated research in team science on Human-Autonomy Teams (HATs).
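The Joint Recurrence Plot analysis mentioned above can be sketched in a few lines. A JRP marks the time pairs (i, j) at which two series each revisit one of their own earlier states, i.e., the elementwise AND of the two individual recurrence matrices; the recurrence rate over that plot is one basic JRQA measure. This is a minimal sketch under illustrative assumptions (one-dimensional series standing in for per-teammate communication activity, arbitrary thresholds); it is not the study's actual pipeline or parameterization.

```python
# Minimal Joint Recurrence Plot (JRP) sketch.
# Assumptions (illustrative, not from the study): two 1-D series of
# per-teammate communication activity, absolute-difference distance,
# and hand-picked recurrence thresholds eps_x and eps_y.
import numpy as np

def recurrence_matrix(x, eps):
    """Binary recurrence matrix: 1 where |x_i - x_j| <= eps."""
    d = np.abs(x[:, None] - x[None, :])
    return (d <= eps).astype(int)

def joint_recurrence(x, y, eps_x, eps_y):
    """JRP: both series recur at the same time pair (elementwise AND)."""
    return recurrence_matrix(x, eps_x) * recurrence_matrix(y, eps_y)

def recurrence_rate(jrp):
    """Fraction of recurrent points -- a basic JRQA measure."""
    return jrp.mean()

# Illustrative data: two teammates' message counts per time bin.
rng = np.random.default_rng(0)
x = rng.poisson(3, size=50).astype(float)
y = rng.poisson(3, size=50).astype(float)

jrp = joint_recurrence(x, y, eps_x=1.0, eps_y=1.0)
print(f"joint recurrence rate: {recurrence_rate(jrp):.3f}")
```

In practice, JRQA is usually run on delay-embedded state vectors rather than raw scalars, and further measures (determinism, line-length statistics) are derived from the diagonal-line structure of the plot; the recurrence rate above is only the simplest of these.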