Abstract

This article explores the notion of the ‘Gray Box’ as a way of providing sufficient information about a learning technology to establish trust. The term ‘system’ is used throughout this article to denote an intelligent agent, robot, or other form of automation that possesses both decision initiative and the authority to act. The article also discusses the previously proposed and tested Situation Awareness-based Agent Transparency (SAT) model, which posits that users need to understand the system’s perception, comprehension, and projection of a situation. A key challenge is that a learning system may adopt behavior that is difficult to understand and hard to condense into traditional if-then statements; without a shared semantic space, the system will have little basis for communicating with the human. A key recommendation of this article is that learning systems should, in turn, be given transparency into the state of the human operator, including the operator’s momentary capabilities and the potential impact of changes in task allocation and teaming approach.
