Abstract

Project overview

Team resilience is an interactive, dynamic process that develops over time as a team maintains performance. This study empirically investigates systems-level resilience in a Remotely Piloted Aircraft System (RPAS) simulated task environment by examining team interaction during novel events. The approach used to measure systems-level resilience was developed by Hoffman and Hancock (2017). In their conceptual study, resilience was considered a key feature of success in emerging complex sociotechnical systems; here, we apply it to Human-Autonomy Teams (HATs). Hoffman and Hancock conceptualized resilience dynamically through several components, such as the time the system takes to recognize and characterize anomalies and the time taken to specify and achieve new goals. Their framework defines two main sub-events that express resilience as time-based measures, and on these we based our own: (1) the time taken to design a new process and (2) the time required to implement it (Hoffman & Hancock, 2017).

Design

In this research, three heterogeneous team members used a text-based system to communicate and photograph target waypoints: (1) the navigator provided information about the flight plan, including the speed and altitude restrictions for each waypoint; (2) the pilot controlled the RPA, adjusting its altitude and airspeed while negotiating with the photographer to obtain a good photo of each target waypoint; and (3) the photographer monitored camera settings and sent the other team members feedback on the status of each target's photograph. The study followed the Wizard of Oz paradigm: the navigator and photographer were seated together in one room and were told that the pilot was a synthetic agent. In actuality, the pilot was a well-trained experimenter working from a separate room who used a restricted vocabulary to simulate that of a computer. The main manipulation consisted of three degraded conditions: (1) automation failure, in which role-level displays failed while specific targets were being processed; (2) autonomy failure, in which the autonomous agent behaved abnormally while processing specific targets (i.e., it provided misinformation to other team members or demonstrated incorrect actions); and (3) a malicious cyber-attack, in which the synthetic agent was hijacked and provided the team with false, detrimental information about the RPA's destination. Because the malicious cyber-attack occurred only once (during the final mission), this study focuses on the automation and autonomy failures. Each failure was introduced at a pre-selected target waypoint for each team, and teams had to find a solution within a limited amount of time; the time limit depended on the difficulty of the failure.

Method

The experiment involved 22 teams. Only two participants per team were randomly assigned, to the navigator and photographer roles, because the pilot was a highly trained experimenter. The task comprised ten 40-minute missions in which teams needed to take as many "good" photos of ground targets as possible while avoiding alarms and rule violations. Using the RPAS paradigm, we calculated two team resilience scores: (1) the time taken to design a new process and (2) the time required to implement it (Hoffman & Hancock, 2017). For these calculations, we used each role's message sent time (in seconds) to express resilience as a proportion of total task time (2,400 seconds). As an outcome measure, we used target processing efficiency, a coordination- and time-based performance score based on how quickly teams were able to take a good photo of each target. A minimal computational sketch of these scores follows.
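The abstract specifies the resilience scores only as proportions of the 2,400-second mission time derived from message timestamps, so the Python sketch below illustrates one plausible way to compute them. The event markers (failure onset, plan agreed, plan implemented) and the target-arrival timestamp are assumptions introduced for illustration, not details taken from the published method.

```python
# A minimal sketch, assuming messages are annotated with three event
# timestamps per failure: failure onset, agreement on a new process,
# and implementation of that process. These markers are hypothetical.

TOTAL_TASK_TIME = 2400.0  # seconds in one 40-minute mission

def resilience_scores(t_failure_onset, t_plan_agreed, t_plan_implemented):
    """Return the two resilience scores as proportions of total task time:
    (1) time to design a new process and (2) time to implement it
    (Hoffman & Hancock, 2017). Smaller proportions suggest a more
    resilient team."""
    design_time = t_plan_agreed - t_failure_onset
    implement_time = t_plan_implemented - t_plan_agreed
    return (design_time / TOTAL_TASK_TIME,
            implement_time / TOTAL_TASK_TIME)

def target_processing_efficiency(t_arrive_target, t_good_photo):
    """Time-based performance score: seconds from reaching a target
    until a good photo of it is taken (assumed operationalization)."""
    return t_good_photo - t_arrive_target

# Example with made-up timestamps (seconds into a mission):
design_p, implement_p = resilience_scores(600.0, 780.0, 900.0)
print(f"design: {design_p:.3f}, implement: {implement_p:.3f}")
# design: 0.075, implement: 0.050
```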
Results and discussion

We found that teams were more resilient during automation failures and progressed toward targets more successfully than during autonomy failures. We see three possible explanations: (1) automation failures were more explicit than autonomy failures, since at least one team member interacted with the other teammates about them; (2) autonomy failures took human teammates longer to identify, because the autonomous agent's abnormal behavior was not as straightforward; and (3) human teammates overtrusted the autonomous agent, lacked confidence in themselves, and let the failure persist.

Acknowledgements

This research is supported by ONR Award N000141712382 (Program Managers: Marc Steinberg, Micah Clark). We also acknowledge the assistance of Steven M. Shope of Sandia Research Corporation, who integrated the synthetic agent and the testbed.
