Abstract
Agent transparency is an important contributor to human performance, situation awareness (SA), and trust in human-agent teaming. However, agent transparency's effects on human performance when the agent is unreliable have yet to be examined. This paper investigated how the transparency and reliability of an autonomous robotic squad member (ASM) affected a human observer's task performance, workload, SA, trust in the robot, and perceptions of the robot. In a 2 (ASM transparency) × 2 (ASM reliability) within-subjects design, participants monitored a simulated soldier squad that included an ASM as it traversed a simulated training environment, while concurrently monitoring the environment for targets. There was no difference in participants' performance on the target detection task, workload, or SA due to either ASM transparency or reliability. ASM reliability did, however, influence participant trust and perceptions of the robot. Results suggest that reliability may be a stronger influence on the human's perceptions of the robot than transparency. Robot errors had a profound and lasting effect on participants' perception of the robot's future reliability and reduced their confidence in their own assessments of that reliability. These findings could have important implications for the continued use of automated systems when the user is aware of system errors.
Highlights
Development of autonomous robotic agents for use in military operations is a priority for the U.S. military [1].
The findings suggest that access to in-depth situation awareness-based agent transparency (SAT) information, or reduced agent reliability, does not distract participants enough to influence their concurrent task performance.
Humans working with autonomous robots in simple, low-workload environments may not have the same SAT needs as those in environments that are more dynamic.
Summary
Development of autonomous robotic agents for use in military operations is a priority for the U.S. military [1]. As a robotic agent's autonomy increases, so too does the difficulty its human teammates experience in maintaining their awareness and understanding of the robot's actions. Having the robot convey information to the human that supports a transparent human–robot interaction addresses these issues. In the context of human interaction with automated systems,

Manuscript received October 9, 2018; revised February 12, 2019 and May 30, 2019; accepted June 11, 2019. H. Kim) and in part by the US Army Research Laboratory Human-Robot Interaction program (Manager Dr. Susan Hill).