Multiple unmanned system control is a complex command and control endeavor, but pairing human operators with an intelligent agent (IA) teammate can buttress the collection and synthesis of data and improve complex decision making. Effective human-autonomy teams (HATs) require human trust in IA teammates to be properly calibrated, which can be supported by communications pertaining to the underlying functions of the IA, or "transparency". One prominent guide for applying transparency is Chen and colleagues' Situation awareness-based Agent Transparency (SAT) model. This effort sought to extend understanding of the model's application by manipulating secondary transparency communication parameters: face threat (i.e., threat to a person's sense of social standing) and the design of transparency communication (verbal, graphical, and iconographical). Results revealed that increasing face threat can improve reliance calibration at low transparency but may be detrimental when transparency is high. Outcomes concerning the method of transparency communication suggest that while verbal communication of transparency information is sufficient, and even preferred, when a low level of transparency is provided, reliance on graphical and iconographical approaches to presenting transparency information increases at a higher level of transparency.