Abstract

Concept learning from examples in first-order languages has been widely studied, and many systems that integrate inductive learning with explanation-based learning have been proposed. Concept learning, however, is only a subproblem of knowledge base revision (a knowledge base is referred to as a theory in first-order logic). The main reason is that concept learning methods use background knowledge as given, regardless of whether it is correct, whereas a knowledge base revision system must revise the knowledge base so that it remains consistent with every change in the environment. This paper presents a true theory revision method that guarantees the revised theory is correct on all given examples; as a consequence, it also solves the concept learning problem. The method proceeds in two phases. The first phase generalizes the input theory to cover all positive examples; in principle, existing concept learning systems can perform this step. The second phase then specializes the theory to exclude the negative examples, and this specialization must not exclude too many of the positive examples covered in the first phase. The paper focuses on the design of such a specialization algorithm. Experimental results show that the proposed algorithm mitigates the over-specialization problem.
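To make the two-phase scheme concrete, the sketch below shows one way it could be organized. This is a minimal illustration, not the paper's algorithm: it uses a simplified propositional encoding (a clause covers an example if the clause's literals are a subset of the example's facts), and all names (`covers`, `generalize`, `specialize`) and the greedy scoring heuristic are assumptions introduced here for exposition.

```python
from typing import FrozenSet, List

Example = FrozenSet[str]  # an example is a set of ground facts
Clause = FrozenSet[str]   # a clause covers an example if its body is a subset

def covers(clause: Clause, ex: Example) -> bool:
    return clause <= ex

def theory_covers(theory: List[Clause], ex: Example) -> bool:
    return any(covers(c, ex) for c in theory)

def generalize(theory: List[Clause], positives: List[Example]) -> List[Clause]:
    # Phase 1: extend the theory until every positive example is covered.
    # Adding a most-specific clause per uncovered positive is the trivial
    # generalization; a real system would use an inductive learner here.
    revised = list(theory)
    for pos in positives:
        if not theory_covers(revised, pos):
            revised.append(pos)
    return revised

def specialize(theory: List[Clause], positives: List[Example],
               negatives: List[Example]) -> List[Clause]:
    # Phase 2: refine each clause that covers a negative example by greedily
    # adding literals, preferring literals that exclude many negatives while
    # retaining many of the clause's covered positives.
    revised: List[Clause] = []
    for clause in theory:
        neg = [n for n in negatives if covers(clause, n)]
        if not neg:
            revised.append(clause)
            continue
        pos = [p for p in positives if covers(clause, p)]
        candidates = set().union(*pos) - clause if pos else set()
        spec = clause
        while neg and candidates:
            lit = max(candidates,
                      key=lambda l: (sum(l not in n for n in neg),
                                     sum(l in p for p in pos)))
            candidates.discard(lit)
            spec = spec | {lit}
            neg = [n for n in neg if covers(spec, n)]
            pos = [p for p in pos if covers(spec, p)]
        if not neg:
            revised.append(spec)
        else:
            # Fall back to most-specific clauses for the positives this clause
            # covered; they cover no negative as long as no negative example
            # subsumes a positive one.
            revised.extend(p for p in positives if covers(clause, p))
    # Restore any positive example that lost coverage during specialization,
    # so the revised theory stays correct on all given examples.
    for p in positives:
        if not theory_covers(revised, p):
            revised.append(p)
    return revised

if __name__ == "__main__":
    theory = [frozenset({"bird"})]                 # overly general clause
    pos = [frozenset({"bird", "flies", "small"})]
    neg = [frozenset({"bird", "penguin"})]
    revised = specialize(generalize(theory, pos), pos, neg)
    print(revised)  # e.g. [frozenset({'bird', 'flies'})]: covers pos, not neg
```

The greedy scoring in `specialize` reflects the constraint stated in the abstract: each added literal is chosen first to exclude negatives and second to preserve positive coverage, and the final restoration pass guards against the over-specialization problem the paper targets.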
