Abstract
In any learning process, learners arrive with many individual characteristics, such as different learning styles, affective states, and prior knowledge, among others. In most cases, their prior knowledge is incomplete or carries a certain degree of uncertainty. Possibilistic Logic was developed as an approach to automated reasoning from uncertain or prioritized incomplete information; the standard possibilistic expressions are classical logic formulas associated with weights. Logic Programming is an important tool in Artificial Intelligence, and the stable model semantics is a declarative semantics for logic programs with default negation. Safe beliefs were introduced as an extension of answer sets in order to study properties and notions of answer sets and Logic Programming from a more general point of view. In [1], the authors present possibilistic safe beliefs, and in [2], the authors introduce possibilistic stable models. In this paper we show an application of possibilistic stable models to a learning situation. Our main result is that the possibilistic stable models of a possibilistic normal program are also possibilistic safe beliefs of that program.
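The notions the abstract relies on can be illustrated with a minimal sketch. Assuming the standard Gelfond-Lifschitz reduct for stable models and the usual min/max necessity propagation for possibilistic programs in the style of [2], the toy program, atoms, and weights below are purely illustrative (they are not taken from the paper):

```python
# A minimal sketch of possibilistic stable models (illustrative only).
# A possibilistic normal rule is (head, positive_body, negative_body, weight),
# where the weight is a necessity degree in (0, 1].
from itertools import combinations

rules = [
    ("b", (), (), 0.9),          # b is a fact with certainty 0.9
    ("a", ("b",), ("c",), 0.7),  # a :- b, not c, with certainty 0.7
]
atoms = {"a", "b", "c"}

def reduct(rules, model):
    # Gelfond-Lifschitz reduct: drop rules whose negative body meets the
    # candidate model; delete the negative literals from the remaining rules.
    return [(h, pb, w) for (h, pb, nb, w) in rules if not set(nb) & model]

def least_model(pos_rules):
    # Fixpoint of the positive program, propagating necessity weights:
    # an atom gets max over applicable rules of min(rule weight, body weights).
    val = {}
    changed = True
    while changed:
        changed = False
        for h, pb, w in pos_rules:
            if all(b in val for b in pb):
                n = min([w] + [val[b] for b in pb])
                if val.get(h, 0) < n:
                    val[h] = n
                    changed = True
    return val

def possibilistic_stable_models(rules, atoms):
    # Brute-force stability check over all candidate sets of atoms.
    models = []
    for k in range(len(atoms) + 1):
        for cand in map(set, combinations(sorted(atoms), k)):
            val = least_model(reduct(rules, cand))
            if set(val) == cand:  # cand is the least model of its own reduct
                models.append(val)
    return models

print(possibilistic_stable_models(rules, atoms))
# → [{'b': 0.9, 'a': 0.7}]
```

Here the unique possibilistic stable model assigns necessity 0.9 to `b` and 0.7 to `a` (capped by the weight of the rule deriving it), matching the intuition that a conclusion is no more certain than the least certain rule or premise used to derive it.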