Abstract

Humans working with autonomous artificially intelligent systems may not be experts in the inner workings of their machine teammates, but they need to understand when to employ, trust, and rely on the system. A critical challenge is to develop machine agents with the capacity to understand their own capabilities and limitations, and the ability to communicate this information to human partners. Self-assessment is an emerging field that tackles this challenge through the development of algorithms that enable autonomous agents to understand and communicate their competency. These methods can engender appropriate trust and align human expectations with autonomous assistant abilities. However, current research in self-assessment is dispersed across many fields, including artificial intelligence, robotics, and human factors. This survey connects work from these disparate areas and reviews state-of-the-art methods for algorithmic self-assessment that enable autonomous agents to estimate, understand, and communicate valuable information pertaining to their competency, with a focus on methods that can improve interactions within human-machine teams. To better understand the landscape of self-assessment approaches, we present a framework for categorizing work in self-assessment based on underlying algorithm type: test-based, learning-based, or knowledge-based. We synthesize common features across these approaches and discuss relevant future directions for research in this emerging space.
