Abstract

This paper describes a relational learning system, LOPSTER (inductive LOgic Programming with Sub-unification of TERms), that learns a class of typical recursive logic programs very efficiently and from a small number of examples. Instead of θ-subsumption, commonly used in learning systems because of its tractability, we use a notion of generalization based on logical implication. This frees us from some shortcomings of θ-subsumption, e.g. the requirement that the clause being induced occur only once in the proof of the training examples. The system is based on sub-unification: a mechanism that unifies subterms of a term with another term without decomposing the first term. Sub-unification allows us to discover the substitutions performed by recursion, as well as the depth of recursion, before inducing a recursive clause. In the paper, we define two classes of syntactically simple but quite common logic programs: purely recursive and left-recursive programs. LOPSTER is able to induce programs belonging to either class, working from examples with arbitrarily complex terms. Compared to other inductive relational learners, LOPSTER does not require large training sets containing structurally similar examples. LOPSTER has been implemented in Prolog. Early experiments reported in the paper show drastic improvements in performance over state-of-the-art relational learning systems.
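
As an illustrative sketch only (this example does not appear in the abstract and is an assumption), consider last/2 below, a small recursive logic program of the kind LOPSTER is described as inducing. Given ground examples such as last([a,b,c], c) and last([c], c), the role of sub-unification is to recover the substitution that each recursive step performs and how many such steps separate the examples. The predicate subterm_depth/3 is a hypothetical stand-in for that idea, not the paper's sub-unification mechanism: it merely reports the nesting depth at which one term occurs inside another.

:- use_module(library(lists)).   % for member/2

% Hedged illustration, not taken from the paper: a typical recursive
% logic program of the kind LOPSTER is described as inducing.
last([X], X).
last([_|T], X) :- last(T, X).

% Hypothetical sketch (assumed name subterm_depth/3): report the depth
% at which Sub occurs as a subterm of Term. This is only a conceptual
% stand-in for the "depth of recursion" that sub-unification discovers,
% not the sub-unification mechanism itself.
subterm_depth(Term, Term, 0).
subterm_depth(Term, Sub, D) :-
    compound(Term),
    Term =.. [_|Args],
    member(Arg, Args),
    subterm_depth(Arg, Sub, D0),
    D is D0 + 1.

% Example query:
% ?- subterm_depth([a,b,c], [c], D).
% D = 2.   (two recursive steps separate the two example terms)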
