Abstract

Formalizing aspects of human judgment under uncertainty in support of decision making is a topic central to the fields of artificial intelligence, decision analysis, and psychology. Automated aids based on probability judgments were first proposed in the 1960s [Edwards, 1962; Edwards et al., 1968]; however, computers were not sufficiently accessible at that time to make the vision practical. By the 1980s, techniques in the artificial intelligence community for expert decision support emphasized logical and deterministic models [de Kleer and Williams, 1987; Genesereth, 1984], as well as non-probabilistic methods [Buchanan and Shortliffe, 1984] for handling uncertainty. Over the last ten years, advances in graphical models, such as Bayesian networks [Pearl, 1988] and influence diagrams [Howard and Matheson, 1981], have led to a resurgence in the use of decision-theoretic approaches for automated decision making [Heckerman et al., 1992; Breese et al., 1992; Abramson et al., 1996]. Interest in these methods has been motivated by the recognition that heuristic decision making in complex, uncertain environments can lead to suboptimal choices [Tversky and Kahneman, 1974; Kleinmuntz, 1985]. Normative systems constructed on the principles of Bayesian probability, multiattribute utility theory, and decision analysis have great potential for automated reasoning and for improving human decision making. This paper merges ideas from decision analysis with techniques for probabilistic reasoning developed in the artificial intelligence community. The methods are being applied on a large scale in automated diagnostic procedures for Microsoft software [Heckerman et al., 1995], thus bringing to fruition the notion of normative decision systems envisioned by Ward Edwards in 1962.
