Abstract
In [12], Nilsson proposed a probabilistic logic in which the truth values of logical propositions are probability values between 0 and 1. It is applicable to any logical system for which the consistency of a finite set of propositions can be established. The probabilistic inference scheme reduces to ordinary logical inference when the probabilities of all propositions are either 0 or 1. This logic shares the limitations of other probabilistic reasoning systems based on the Bayesian approach. For common-sense reasoning, consistency is not a very natural assumption; well-known examples include {Dick is a Quaker, Quakers are pacifists, Republicans are not pacifists, Dick is a Republican} and {Tweety is a bird, birds can fly, Tweety is a penguin}. In this paper, we shall propose some extensions of probabilistic logic.

In the second section, we shall consider the space of all interpretations, consistent or not. In terms of frames of discernment, the basic probability assignment (bpa) and the belief function can be defined, and Dempster's combination rule is applicable. This extension of probabilistic logic is called evidential logic in [1]. For each proposition s, its belief function is represented by an interval [Spt(s), Pls(s)]. When all such intervals collapse to single points, evidential logic reduces to probabilistic logic (in the generalized version with not necessarily consistent interpretations); by further restricting to consistent interpretations, we recover Nilsson's probabilistic logic.

In the third section, we shall give a probabilistic interpretation of probabilistic logic in terms of multi-dimensional random variables. This interpretation brings probabilistic logic into the framework of probability theory. Consider a finite set S = {s1, s2, ..., sn} of logical propositions. Each proposition may take the value true or false and may be considered a random variable, so we have a probability distribution for each proposition. The n-dimensional random variable (s1, ..., sn) takes values in the space of all interpretations, the 2^n binary vectors, and we may compute absolute (marginal), conditional, and joint probability distributions. It turns out that the permissible probabilistic interpretation vector of Nilsson [12] consists of the joint probabilities of S. Inconsistent interpretations are excluded by setting their joint probabilities to zero. By summing appropriate joint probabilities, we obtain the probabilities of individual propositions or of subsets of propositions. Since Bayes' formula and other techniques are valid for n-dimensional random variables, probabilistic logic is actually very close to Bayesian inference schemes.

In the last section, we shall consider a relaxation scheme for probabilistic logic. In this system, not only does new evidence update the belief measures of a collection of propositions, but constraint satisfaction among these propositions in the relational network also revises these measures. This mechanism is similar to human reasoning, which is an evaluative process converging to the most satisfactory result. The main idea arises from the consistent labeling problem in computer vision. The method was originally applied to the scene analysis of line drawings; later it was applied to matching, constraint satisfaction, and multi-sensor fusion by several authors [8], [16] (and see the references cited there). Recently, it has been used for knowledge aggregation by Landy and Hummel [9].
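As a minimal illustration of the belief interval [Spt(s), Pls(s)] and Dempster's combination rule mentioned above, the following Python sketch combines two basic probability assignments over a tiny frame of discernment. The three-element frame, the particular mass values, and the helper names (combine, support, plausibility) are illustrative assumptions of this sketch, not code or numbers from [1] or [12].

    from itertools import product

    # Illustrative frame of discernment with three elements.
    frame = frozenset({"a", "b", "c"})

    def combine(m1, m2):
        """Dempster's rule: combine two bpas (dicts mapping frozensets to mass),
        renormalizing away the mass assigned to conflicting (empty) intersections."""
        combined, conflict = {}, 0.0
        for (x, mx), (y, my) in product(m1.items(), m2.items()):
            z = x & y
            if z:
                combined[z] = combined.get(z, 0.0) + mx * my
            else:
                conflict += mx * my
        k = 1.0 - conflict
        return {s: v / k for s, v in combined.items()}

    def support(m, s):        # Spt(s): total mass committed to subsets of s
        return sum(v for x, v in m.items() if x <= s)

    def plausibility(m, s):   # Pls(s): total mass not committed against s
        return sum(v for x, v in m.items() if x & s)

    # Two hypothetical pieces of evidence about the same frame.
    m1 = {frozenset({"a"}): 0.6, frame: 0.4}
    m2 = {frozenset({"a", "b"}): 0.7, frame: 0.3}
    m = combine(m1, m2)
    s = frozenset({"a"})
    print(round(support(m, s), 3), round(plausibility(m, s), 3))  # interval [0.6, 1.0]

When Spt(s) and Pls(s) coincide for every proposition s, each interval collapses to a single point and we are back in the probabilistic setting described above.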
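The joint-probability reading of the third section can be sketched in the same spirit. The following Python fragment enumerates the 2^n interpretations of three propositions, zeroes out the inconsistent ones, and recovers marginal probabilities by summation. The choice of propositions, the single constraint, and the uniform joint distribution over consistent interpretations are hypothetical choices made only for illustration.

    from itertools import product

    # Illustrative propositions: s1 = "Tweety is a bird", s2 = "birds can fly",
    # s3 = "Tweety can fly". Interpretations are the 2^3 binary truth-value vectors.
    interpretations = list(product([False, True], repeat=3))

    def consistent(v):
        s1, s2, s3 = v
        # Single assumed constraint for this sketch: (s1 and s2) implies s3.
        return (not (s1 and s2)) or s3

    # Joint distribution: uniform over consistent interpretations, zero elsewhere,
    # mirroring how inconsistent interpretations are excluded above.
    ok = [v for v in interpretations if consistent(v)]
    joint = {v: (1.0 / len(ok) if v in ok else 0.0) for v in interpretations}

    def marginal(i):
        # Probability of proposition i: sum the joint probabilities of all
        # interpretations in which it is true.
        return sum(p for v, p in joint.items() if v[i])

    print([round(marginal(i), 3) for i in range(3)])

Summing the joint probabilities over the interpretations in which a proposition (or a subset of propositions) holds is exactly the reduction to individual and subset probabilities described above, and standard tools such as Bayes' formula apply to this joint distribution directly.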