Abstract
Project overview. The current research aims to understand how human operators effectively team with urban search robot teammates in a dynamic and complex task environment. With that in mind, we examined how shared cognition and restricted language capabilities impact performance of human dyadic teams using a simulated Minecraft task environment. In this human dyadic team, an internal teammate (comparable to robot) identifies the location of victims while navigating inside a game environment that reflects a collapsed building; and an external teammate (comparable to operator) sees their teammate's actions from a different screen and guides them through the environment, tracking the location of victims on a map as they go. In order to examine the effects of language and shared cognition, a two by two design was chosen: (1) in the communication manipulation, participants are either able to communicate using natural language or the internal participant’s communication is limited to three-word utterances; and (2) in the shared cognition manipulation, either the internal participant is made fully aware of the external participant’s restricted representation of the environment and inaccurate map or the internal is unaware of these challenges. Method. This study used a simulated search task, in the Minecraft gaming environment, where two humans acted as a human-robot search team (Bartlett & Cooke, 2015). After signing consent forms, two randomly selected participants completed a half-hour training session for their individual role. Both participants sat in the same room but had a divider between them, and then, interacted to complete a 15-minute simulated search task. The Minecraft environment resembled an office structure with interspersed green, blue, and pink blocks that represented potential targets. Blue and green blocks were meant to represent survivors, whereas pink blocks represented hazards. 
Pressing a button on a green block counted positively toward the team's performance, whereas pressing a button on a pink block counted negatively. Blue blocks were time-sensitive: pressing a button on them before eight minutes into the scenario counted positively toward performance, but pressing the button after that time counted against performance. Pressing the button on any block more than once also counted negatively toward the team's performance. A map of the environment was made available. Inconsistencies, such as missing walls, additional walls, and misplaced doorways, were intentionally introduced to simulate a damaged building; none of these were depicted on the map. Because of the dynamic nature of the task, effective communication and coordination within the dyad are required for effective performance. Several measures were obtained in this research: team performance, situation awareness, NASA TLX workload, team verbal behaviors, team communication flow, and demographics. In the interest of space, we present only team performance, a determinism measure (which served as an index of flexible behavior and was estimated from team communication flow using Recurrence Quantification Analysis; Marwan, Romano, Thiel, & Kurths, 2007), and NASA TLX workload.

Results and conclusion. The primary findings from this study are that: 1) teams in the natural language and shared model conditions performed better than teams in the limited language and restricted model conditions, respectively; 2) when the internal participant is unaware of the challenges facing the external participant, the external participant perceives higher workload than when cognition is shared; 3) teams with natural language and a shared model demonstrated more predictable behavior than the other teams; and 4) some amount of system predictability is beneficial, but too much predictability is not, which corroborates another study (Demir, Likens, Cooke, Amazeen, & McNeese, in review).
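As a concrete illustration, the block-scoring rules described in the Method can be sketched as follows. The point magnitudes are illustrative assumptions; the abstract specifies only whether a press counts positively or negatively.

```python
def score_button_press(color, minute, already_pressed):
    """Sketch of the team-scoring rules for a single button press.

    Point values (+1/-1) are assumptions for illustration; the
    abstract gives only the sign of each outcome.
    """
    if already_pressed:
        # Any repeat press on a block counts against performance.
        return -1
    if color == "green":
        return +1                       # survivor: counts positively
    if color == "pink":
        return -1                       # hazard: counts negatively
    if color == "blue":
        # Time-sensitive survivor: positive before eight minutes
        # into the scenario, negative afterward.
        return +1 if minute < 8 else -1
    raise ValueError(f"unknown block color: {color}")
```

For example, a blue block pressed at minute 9 scores negatively even though it represents a survivor.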
Overall, these results indicate that effective team interaction and shared cognition play an important role in human-robot teaming performance.

Acknowledgements. The Human-Robot Dyad research was partially supported by ONR Grant N0014-13-1-0519 to PI Subbarao Kambhampati (Program Manager: Marc Steinberg). We also acknowledge the assistance of Aaron Bradbury, Emily Gran, Jocelyn Martinez, and Madeline Niichel with data collection.
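For readers unfamiliar with the determinism measure, the following is a minimal sketch of how percent determinism (%DET) can be estimated from a categorical communication-flow series, following the general RQA definitions in Marwan et al. (2007). The function name and the alternating-speaker example are our own, not from the study.

```python
import numpy as np

def determinism(series, min_length=2):
    """Estimate %DET of a categorical series (e.g., which team member
    is communicating at each time step) via Recurrence Quantification
    Analysis: the fraction of recurrent points that fall on diagonal
    lines of at least `min_length` points. Higher values indicate more
    predictable (less flexible) interaction patterns.
    """
    x = np.asarray(series)
    n = len(x)
    # Recurrence matrix: R[i, j] = 1 when the same state recurs.
    R = (x[:, None] == x[None, :]).astype(int)
    np.fill_diagonal(R, 0)  # exclude the line of identity
    recurrent = R.sum()
    if recurrent == 0:
        return 0.0
    # Count recurrent points lying on diagonal lines >= min_length
    # (upper triangle only; the matrix is symmetric).
    on_lines = 0
    for k in range(1, n):
        diag = np.diagonal(R, offset=k)
        run = 0
        for point in np.append(diag, 0):  # sentinel 0 flushes last run
            if point == 1:
                run += 1
            else:
                if run >= min_length:
                    on_lines += run
                run = 0
    return 2 * on_lines / recurrent

# A strictly alternating speaker sequence is maximally predictable.
det = determinism([1, 2, 1, 2, 1, 2, 1, 2])  # -> 1.0
```

A series with no repeated states yields a determinism of 0, whereas rigidly patterned turn-taking approaches 1, which is how the measure indexes the predictability finding reported above.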
Proceedings of the Human Factors and Ergonomics Society Annual Meeting