Abstract

Every concept learning system produces hypotheses written in some constrained language, called the concept description language, and in most learning systems this language is fixed. This paper describes a learning system that makes a large part of the concept description language an explicit input, and discusses some of the applications of providing this additional input. In particular, we present a technique for learning a logic program in which the antecedent of each clause must be generated by a special antecedent description language. We show that this technique can exploit many different kinds of background knowledge, including constraints on how predicates can be used, programming clichés, overgeneral theories, incomplete theories, and theories syntactically close to the target theory. The approach thus unifies many of the problems previously studied in knowledge-based learning.
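To illustrate the idea, the sketch below shows a toy antecedent description language expressed as a context-free grammar whose productions enumerate the clause bodies a learner would be allowed to consider. The grammar, the predicate names (`parent`, `female`), and the target clause head are all hypothetical examples, not taken from the paper; a real system would additionally test each generated clause against training examples.

```python
from itertools import product

# Hypothetical antecedent description language, written as a
# context-free grammar: BODY expands to a conjunction of one or two
# candidate literals for a clause antecedent.
GRAMMAR = {
    "BODY": [["LIT"], ["LIT", "LIT"]],
    "LIT": [["parent(X,Z)"], ["parent(Z,Y)"], ["female(X)"]],
}

def generate(symbol):
    """Enumerate all literal sequences derivable from `symbol`."""
    if symbol not in GRAMMAR:          # terminal: a concrete literal
        return [[symbol]]
    results = []
    for production in GRAMMAR[symbol]:
        # Expand each symbol in the production, then take the
        # cross product of the expansions.
        expansions = [generate(s) for s in production]
        for combo in product(*expansions):
            results.append([lit for part in combo for lit in part])
    return results

# The hypothesis space is exactly the set of clauses whose antecedent
# the grammar can generate: 3 one-literal + 9 two-literal bodies.
antecedents = generate("BODY")
for body in antecedents:
    print("grandparent(X,Y) :- " + ", ".join(body))
```

Restricting the grammar (for example, forbidding `female(X)` in the `LIT` production) directly shrinks the hypothesis space, which is how background knowledge such as predicate-usage constraints or programming clichés can be expressed in this framework.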
