Abstract

Humans are well known for being adept at using their intuition and expertise in many situations. However, in some settings even human experts are susceptible to errors in judgement and to a failure to recognize the limits of their knowledge. This happens especially often in semi-structured situations, when multi-disciplinary expertise is required, or when uncertainty is a factor. At these times our natural ability to recognize and correct errors fails us, because we place faith in our own reasoning. One way to deal with such problems is to have a computerized “critic” assist in the process. This article introduces the concept of automated critics that collaborate with human experts to help improve their problem-solving performance. A critic is a narrowly focused program that uses a knowledge base to help it recognize (1) what types of human error have occurred, and (2) what kinds of criticism strategies could help the user prevent or eliminate those errors. In discussing the “errors” half of this knowledge base, a distinction is drawn between errors in the expert's knowledge and errors in his or her judgement. The focus of this article is more on judgement than on knowledge, though both are addressed. To build automated critics it is important to understand the use and behavior of human critics. For this reason, critic theory and principles and rules for critic design are described. These are presented by showing the types of criticism encountered across a variety of generic tasks, such as medical diagnosis, coaching, forecasting, and authoring, among many others. A model of expert cognition and rules for identifying cognitive biases are then presented. This rule base exploits four decades of literature on the psychology of judgement and decision making as a generative theory of “bugs” in expert intuition and as a body of deep knowledge from which rules about buggy behavior are drawn. For commonly recurring expert errors, specific preventive and corrective strategies are reviewed, and considerations for criticism presentation and deployment are explained. Particular attention is given to rules about when and how criticism should be offered. By consulting and attempting to operationalize the judgement and decision-making literature within the critiquing approach, this article establishes criticism-based problem solving as a novel way to bridge the gap between the traditional domain-knowledge-rich approaches of AI and the domain-independent, theory-rich approaches of decision analysis. Attention is also devoted to the obstacles to, and opportunities for, further bridging this gap.
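
To make the two-part structure of such a knowledge base concrete, the following is a minimal sketch, not taken from the article, of how a critic's rule base might pair error types drawn from the judgement and decision-making literature with criticism strategies and simple conditions governing when criticism is offered. All rule names, detection conditions, and thresholds here are illustrative assumptions.

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class CriticRule:
    """One rule in a hypothetical critic knowledge base: a recognizable
    expert error paired with a criticism strategy and a deployment rule."""
    error_type: str                 # e.g., a cognitive bias from the judgement literature
    detect: Callable[[dict], bool]  # predicate over a snapshot of the expert's work
    strategy: str                   # preventive or corrective criticism to offer
    offer_when: str                 # when/how the criticism should be presented

# Illustrative rules only; an actual critic's rule base would be far richer.
RULES: List[CriticRule] = [
    CriticRule(
        error_type="anchoring",
        detect=lambda s: abs(s["estimate"] - s["initial_anchor"]) < 0.05 * s["initial_anchor"],
        strategy="Ask the expert to re-derive the estimate without the first figure seen.",
        offer_when="after the estimate is committed, before it is acted upon",
    ),
    CriticRule(
        error_type="overconfidence",
        detect=lambda s: s["stated_confidence"] > 0.9 and s["evidence_items"] < 3,
        strategy="Prompt the expert to list reasons the judgement could be wrong.",
        offer_when="only for high-stakes decisions, to avoid nuisance criticism",
    ),
]

def critique(state: dict) -> List[str]:
    """Return the criticisms whose error conditions fire on the current state."""
    return [
        f"[{r.error_type}] {r.strategy} (offer: {r.offer_when})"
        for r in RULES
        if r.detect(state)
    ]

if __name__ == "__main__":
    # A toy snapshot of an expert's in-progress judgement.
    state = {"estimate": 102.0, "initial_anchor": 100.0,
             "stated_confidence": 0.95, "evidence_items": 2}
    for message in critique(state):
        print(message)
```

Separating error detection (the `detect` predicate) from presentation (the `offer_when` field) mirrors the distinction the abstract draws between recognizing what error has occurred and deciding when and how criticism should be offered.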
