Abstract

Human problem solving and decision making often take place in environments where information about the problem is partial or approximate. AI researchers have attempted to emulate this capability in computer expert systems, yet most of the methods used to date lack a theoretical foundation. Several theories for handling uncertain information have been proposed in recent years. In this paper, we critically review these theories. The main theories we examine are probability theory, Shafer's evidence theory, Zadeh's possibility theory, Cohen's theory of endorsements, and the non-monotonic logics. We describe each theory in terms of its representation of uncertain information, its combination of bodies of information, and inferencing with such information, and we consider the strong and weak aspects of each theory.
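As an illustration of what "combination of bodies of information" means in one of the reviewed theories, the following is a minimal sketch of Dempster's rule of combination from Shafer's evidence theory. The mass-function representation as a dict from `frozenset` to mass, and the names `dempster_combine`, `m1`, `m2`, are illustrative choices, not notation from the paper.

```python
from itertools import product

def dempster_combine(m1, m2):
    """Combine two basic mass assignments (dict: frozenset -> mass)
    over the same frame of discernment via Dempster's rule."""
    combined = {}
    conflict = 0.0  # total mass assigned to contradictory (disjoint) pairs
    for (a, p), (b, q) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + p * q
        else:
            conflict += p * q
    if conflict >= 1.0:
        raise ValueError("total conflict: the two bodies of evidence cannot be combined")
    # normalize by 1 - K, where K is the conflict mass
    return {s: v / (1.0 - conflict) for s, v in combined.items()}

# Hypothetical frame {rain, sun} with two independent sources of evidence
m1 = {frozenset({"rain"}): 0.6, frozenset({"rain", "sun"}): 0.4}
m2 = {frozenset({"rain"}): 0.3, frozenset({"sun"}): 0.5,
      frozenset({"rain", "sun"}): 0.2}
m12 = dempster_combine(m1, m2)
print(m12[frozenset({"rain"})])  # combined belief mass on {rain}
```

Note how the rule renormalizes away the conflicting mass; the behavior of this normalization under highly conflicting evidence is one of the contested aspects of the theory.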
