Abstract

AI systems often need to deal with inconsistencies. One way of handling an inconsistent knowledgebase is to measure the amount of inconsistency it contains. In the past 20 years, numerous inconsistency measures have been proposed. Many of these are syntactic measures, that is, they are based in some way on the minimal inconsistent subsets of the knowledgebase. Very little attention has been given to semantic inconsistency measures, that is, ones based on the models of the knowledgebase, where the notion of a model is generalized so that an atom may be assigned a truth value denoting contradiction. In fact, only one nontrivial semantic inconsistency measure, the contension measure, is in wide use. The purpose of this paper is to define a class of semantic inconsistency measures based on 3-valued logics. First, we show which 3-valued logics are useful for this purpose. Then we show that the class of semantic inconsistency measures can be developed in a graphical framework similar to the one used to study syntactic inconsistency measures. We give several examples of semantic inconsistency measures, show how they apply to three useful 3-valued logics, investigate their properties, and illustrate their computation on several knowledgebases.
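To make the semantic approach concrete, the contension measure mentioned above can be sketched as follows: enumerate 3-valued assignments over the atoms (with a third value B denoting contradiction, as in Priest's logic LP, where both T and B are designated), keep those that model the knowledgebase, and take the minimal number of atoms assigned B. This is a minimal illustrative sketch, not the paper's implementation; the tuple encoding of formulas and the function names are assumptions made here.

```python
from itertools import product

# Truth values in an LP-style 3-valued logic, ordered F < B < T.
# B is the value denoting contradiction; designated values are T and B.
F, B, T = 0, 1, 2
DESIGNATED = {T, B}

def neg(v):
    return {T: F, F: T, B: B}[v]

def eval_formula(f, val):
    """Evaluate a formula encoded as nested tuples:
    ('atom', name), ('not', f), ('and', f, g), ('or', f, g)."""
    op = f[0]
    if op == 'atom':
        return val[f[1]]
    if op == 'not':
        return neg(eval_formula(f[1], val))
    if op == 'and':
        return min(eval_formula(f[1], val), eval_formula(f[2], val))
    if op == 'or':
        return max(eval_formula(f[1], val), eval_formula(f[2], val))
    raise ValueError(f'unknown connective: {op}')

def contension(kb, atoms):
    """Minimal number of atoms assigned B over all 3-valued models of kb
    (None if kb has no model, which cannot happen in LP)."""
    best = None
    for vs in product((F, B, T), repeat=len(atoms)):
        val = dict(zip(atoms, vs))
        if all(eval_formula(f, val) in DESIGNATED for f in kb):
            n_b = sum(1 for v in vs if v == B)
            best = n_b if best is None else min(best, n_b)
    return best

# K = {p, not p, q}: p is forced to the contradictory value B, q can be T.
kb = [('atom', 'p'), ('not', ('atom', 'p')), ('atom', 'q')]
print(contension(kb, ['p', 'q']))  # -> 1
```

The brute-force enumeration is exponential in the number of atoms and is meant only to show how a model-based (semantic) measure differs from one built on minimal inconsistent subsets.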
