Abstract

A primary motivation for reasoning under uncertainty is to derive decisions in the face of inconclusive evidence. Shafer's theory of belief functions, which explicitly represents the underconstrained nature of many reasoning problems, lacks a formal procedure for making decisions. When sufficient information is not available, no theory can prescribe actions without additional assumptions; faced with this situation, some assumption must be made if a clearly superior choice is to emerge. This paper offers a probabilistic interpretation of a simple assumption that disambiguates decision problems represented with belief functions. It is proved that this assumption yields expected values identical to those obtained by a probabilistic analysis that makes the same assumption. A strict separation is maintained between evidence that carries information about a situation and assumptions that may be made to disambiguate choices. In addition, it is shown how the decision analysis methodology frequently employed in probabilistic reasoning can be extended for use with belief functions. This generalization of decision analysis allows the use of belief functions within the familiar framework of decision trees.
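As a minimal, purely illustrative sketch (the paper's own assumption and notation may differ), the example below takes the disambiguating assumption to be a uniform split of each focal element's mass among its members. Under that hypothetical assumption, the expected utility computed directly from the belief function coincides, by construction, with the expected utility of the probability distribution induced by the same assumption, which is the kind of agreement the abstract describes. All names, masses, and utilities here are invented for illustration.

```python
def expected_value_belief(mass, utility):
    """Expected utility of a mass function under a uniform-split assumption.

    mass    : dict mapping frozenset of outcomes -> mass (masses sum to 1)
    utility : dict mapping outcome -> utility
    """
    ev = 0.0
    for focal, m in mass.items():
        # Assumption (hypothetical): mass on a focal set is divided
        # equally among its members before taking the expectation.
        ev += m * sum(utility[w] for w in focal) / len(focal)
    return ev


def induced_probability(mass):
    """Probability distribution obtained by the same uniform-split assumption."""
    prob = {}
    for focal, m in mass.items():
        for w in focal:
            prob[w] = prob.get(w, 0.0) + m / len(focal)
    return prob


def expected_value_prob(prob, utility):
    """Ordinary probabilistic expected utility."""
    return sum(p * utility[w] for w, p in prob.items())


if __name__ == "__main__":
    # Hypothetical example: evidence only partially constrains the outcome.
    utility = {"a": 100.0, "b": 40.0, "c": 0.0}
    mass = {
        frozenset({"a"}): 0.5,            # evidence pointing directly at a
        frozenset({"a", "b", "c"}): 0.3,  # wholly ambiguous mass
        frozenset({"b", "c"}): 0.2,
    }
    ev_bel = expected_value_belief(mass, utility)
    ev_prob = expected_value_prob(induced_probability(mass), utility)
    print(ev_bel, ev_prob)  # identical by construction under this assumption
```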
