Abstract

This chapter discusses the usefulness of new theories of uncertainty for modeling some facets of uncertain knowledge, especially vagueness, in artificial intelligence. It can be viewed as a partial reply to Cheeseman's defense of probability. In spite of the growing body of work dealing with deviant models of uncertainty in artificial intelligence, there is a strong reaction from proponents of classical probability, who claim that new uncertainty theories are at best unnecessary and at worst misleading. Interestingly, however, the trend toward going beyond probabilistic models of subjective uncertainty is emerging even in the orthodox field of decision theory, to account for the systematic deviations of human behavior from the expected utility model. The chapter presents the point of view of probability theory and those of two presently popular alternative settings: possibility theory and the theory of evidence. It discusses why probability measures cannot account for all the facets of uncertainty, especially partial ignorance, imprecision, and vagueness, and how the other theories can do the job without rejecting the laws of probability where they apply.
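The contrast the abstract draws between probability and partial ignorance can be sketched numerically. The snippet below is a minimal illustration of possibility theory's two dual measures, possibility and necessity; the frame of discernment and events are illustrative assumptions, not examples taken from the chapter:

```python
# Illustrative sketch: total ignorance in possibility theory.
# Frame and events are invented for this example.

frame = {"red", "green", "blue"}

def possibility(event, pi):
    """Pi(A) = max of the possibility degrees of elements of A."""
    return max((pi[x] for x in event), default=0.0)

def necessity(event, pi):
    """N(A) = 1 - Pi(not A): the degree to which A is certain."""
    return 1.0 - possibility(frame - event, pi)

# Total ignorance: every alternative is fully possible.
ignorance = {x: 1.0 for x in frame}

A = {"red"}
print(possibility(A, ignorance))  # 1.0 -> A is entirely possible
print(necessity(A, ignorance))    # 0.0 -> yet not certain at all
```

Here Pi(A) = 1 and N(A) = 0 together express "nothing is known about A". A single probability measure cannot encode this state: a uniform distribution assigns P({"red"}) = 1/3, which is indistinguishable from genuine statistical knowledge of three equiprobable outcomes.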
