Abstract

In autonomous multiagent or multirobotic systems, the ability to respond quickly and accurately to threats and uncertainties is important for both mission outcomes and survivability. Such systems are never truly autonomous, often operating as part of a human-agent team. Intelligent agents (IAs) have been proposed as tools to help manage such teams, e.g., by proposing potential courses of action to human operators. However, they are often underutilized due to a lack of trust. Designing transparent agents, which can convey at least some information about their internal reasoning processes, is considered an effective method of increasing trust. How people interact with such transparency information to gain situation awareness while avoiding information overload is currently an unexplored topic. In this article, we go partway toward answering this question by investigating two forms of transparency: sequential transparency, which requires people to step through the IA's explanation in a fixed order; and demand-driven transparency, which allows people to request information as needed. In an experiment using a multivehicle simulation, our results show that demand-driven interaction improves operators' trust in the system while maintaining, and at times improving, performance and usability.
