Abstract

Multiple unmanned system control is a complex command and control endeavor, but pairing human operators with an intelligent agent (IA) teammate can buttress the collection and synthesis of data and improve complex decision making. Effective human-autonomy teams (HATs) require human trust in IA teammates to be properly calibrated, which can be supported by communications pertaining to the underlying functions of the IA, or “transparency”. One prominent guide for applying transparency is Chen and colleagues' Situation awareness-based Agent Transparency (SAT) model. This effort sought to extend understanding of the application of this model by manipulating secondary transparency communication parameters: face threat (i.e., threat to a person's sense of social standing) and the design of transparency communication (verbal, graphical, and iconographic). Results revealed that increasing face threat can improve reliance calibration at low transparency but may be detrimental when transparency is high. Outcomes concerning the method of transparency communication suggest that while verbal presentation of transparency information is sufficient, and even preferred, when a low level of transparency is provided, reliance on graphical and iconographic presentations of transparency information increases at a higher level of transparency.