Abstract

In any learning process, learners arrive with many individual variables, such as their learning styles, their affective states, and their prior knowledge, among others. In most cases, this prior knowledge is incomplete or carries some degree of uncertainty. Possibilistic Logic was developed as an approach to automated reasoning from uncertain or prioritized incomplete information; its standard expressions are classical logic formulas associated with weights. Logic Programming is an important tool in Artificial Intelligence. Safe beliefs were introduced as an extension of answer sets, in order to study properties and notions of answer sets and Logic Programming from a more general point of view, and the stable model semantics is a declarative semantics for logic programs with default negation. In [1], the authors present possibilistic safe beliefs; in [2], the authors introduce possibilistic stable models. In this paper we show an application of possibilistic stable models to a learning situation. Our main result is that the possibilistic stable models of a possibilistic normal program are also possibilistic safe beliefs of that program.
