Abstract

A central process in many kinds of learning is generalization, or concept learning from a set of training instances (a set of examples and counterexamples) in the presence of background knowledge. Given a set of examples and counterexamples of a concept, the learner induces a general concept description that covers all of the positive examples, none of the counterexamples, and is consistent with the background knowledge. We present a logical framework for inducing concept descriptions from a given set of examples and counterexamples in the presence of background knowledge expressed as a set of Horn clauses. In particular, we provide: 1) a definition of what is meant by “a concept learnable from examples and counterexamples in the presence of background knowledge described by a set H of Horn clauses”; broadly speaking, in our framework, a learnable concept C is an atom that is true (valid) in the least Herbrand model of H but false in some other models of H; and 2) a methodology for inducing general concept descriptions from concept examples and counterexamples in Horn theories. We give an automatic method, based on a well-founded ordering on the elements of H, which serves as the basis for checking validity in the least Herbrand model of H. This work generalizes and unifies previous studies on the concept-learning-from-examples paradigm and places them in a single logical framework.
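
As a minimal illustrative sketch (not the paper's algorithm), the role played by the least Herbrand model of H can be shown by computing it for a small ground Horn program with the standard bottom-up immediate-consequence iteration, and then testing whether a candidate concept atom holds in it. The program and atoms below are hypothetical examples, not taken from the paper.

    # Python sketch: least Herbrand model of a finite ground Horn program,
    # computed as the least fixpoint of the immediate-consequence operator.
    def least_herbrand_model(clauses):
        """clauses: list of (head_atom, [body_atoms]) for ground Horn clauses.
        Returns the least Herbrand model as a set of ground atoms."""
        model = set()
        changed = True
        while changed:
            changed = False
            for head, body in clauses:
                if head not in model and all(b in model for b in body):
                    model.add(head)
                    changed = True
        return model

    # Hypothetical background knowledge H as ground Horn clauses:
    H = [
        ("bird(tweety)", []),
        ("bird(polly)", []),
        ("penguin(polly)", []),
        ("flies(tweety)", ["bird(tweety)"]),
    ]

    M = least_herbrand_model(H)

    # In the abstract's terms, a candidate concept atom must at least be
    # true in the least Herbrand model of H to qualify as learnable.
    print("flies(tweety)" in M)   # True  -> holds in the least Herbrand model
    print("flies(polly)" in M)    # False -> not derivable from H

The paper's method additionally relies on a well-founded ordering to organize this validity check; the naive fixpoint iteration above is only meant to make the semantic setting concrete.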
