Abstract

This article explores the notion of the ‘Gray Box’ to symbolize the idea of providing sufficient information about a learning technology to establish trust. The term ‘system’ is used throughout this article to represent an intelligent agent, robot, or other form of automation that possesses both decision initiative and the authority to act. The article also discusses a proposed and tested Situation Awareness-based Agent Transparency (SAT) model, which posits that users need to understand the system’s perception, comprehension, and projection of a situation. One of the key challenges is that a learning system may adopt behavior that is difficult to understand and hard to condense into traditional if-then statements. Without a shared semantic space, the system will have little basis for communicating with the human. A key recommendation of this article is that learning systems, in turn, need transparency into the state of the human operator, including their momentary capabilities and the potential impact of changes in task allocation and teaming approach.
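The SAT model's three levels of transparency information can be sketched as a simple data structure. This is purely illustrative: the SAT model describes categories of information to convey to an operator, not a software interface, and all names and example strings below are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class SATReport:
    """Hypothetical container for the three SAT transparency levels."""
    perception: str     # Level 1: the agent's current state, goals, and actions
    comprehension: str  # Level 2: the agent's reasoning about the situation
    projection: str     # Level 3: the agent's expected future outcomes

def transparency_message(report: SATReport) -> str:
    """Render the three SAT levels as one operator-facing message."""
    return (f"Perceiving: {report.perception}. "
            f"Because: {report.comprehension}. "
            f"Expecting: {report.projection}.")

msg = transparency_message(SATReport(
    perception="obstacle detected on planned route",
    comprehension="route blocked, so rerouting minimizes delay",
    projection="arrival delayed by several minutes",
))
```

The point of the sketch is only that each level answers a different operator question (what, why, what next); a real learning system would need to ground each field in its internal representations, which, as the article notes, may resist reduction to such simple statements.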
