Abstract

Decision theory is usually partitioned according to whether the decision is made under conditions of (a) certainty, (b) risk, or (c) uncertainty. These areas are defined as follows: (a) certainty, if each action taken by the decision maker is known to lead invariably to a specific outcome; (b) risk, if each action leads to one of a set of possible outcomes, each of which occurs with a known probability; (c) uncertainty, if each action leads to one of a set of possible outcomes, but the probability of a particular outcome is not known to the decision maker. Luce and Raiffa (11, p. 13) suggest adding a fourth classification (d), a combination of risk and uncertainty in the light of experimental evidence--the area of statistical inference. Decision making in the realm of certainty poses no particular problems, since each action has a single-valued or known outcome; the decision maker simply selects the action with the most favorable outcome. However, decision problems under risk and uncertainty have several possible outcomes associated with each action. A set of decision rules, consistent with the decision maker's objective (utility) function, is needed to select the course of action that maximizes utility. This paper presents one method of developing decision rules when the outcome of alternative actions cannot be specified with certainty. The model presented is applicable to a wide range of decision problems (1, 2, 5, 6).
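As a rough illustration of the risk case described above (not drawn from the paper itself), the following Python sketch applies the expected-utility decision rule: each action has a known probability distribution over outcomes, and the rule selects the action whose probability-weighted utility is highest. The action names, probabilities, and utilities are hypothetical.

```python
# Illustrative sketch only: decision under risk via expected-utility maximization.
# Each action maps to a list of (probability, utility) pairs; probabilities are
# assumed known, as in the "risk" classification (b) above.

actions = {
    "plant_early": [(0.6, 120.0), (0.4, 40.0)],   # hypothetical payoffs
    "plant_late":  [(0.3, 150.0), (0.7, 70.0)],
}

def expected_utility(outcomes):
    """Expected utility of one action: sum of probability-weighted utilities."""
    return sum(p * u for p, u in outcomes)

# Decision rule: choose the action with the highest expected utility.
best_action = max(actions, key=lambda a: expected_utility(actions[a]))
print(best_action, expected_utility(actions[best_action]))
```

Under uncertainty, where the probabilities are unknown, this rule no longer applies directly, which is why the paper develops alternative decision rules for that case.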
