Abstract

Many forms of inductive logic programming (ILP) use metarules, second-order Horn clauses, to define the structure of learnable programs and thus the hypothesis space. Deciding which metarules to use for a given learning task is a major open problem and is a trade-off between efficiency and expressivity: the hypothesis space grows given more metarules, so we wish to use fewer metarules, but if we use too few metarules then we lose expressivity. In this paper, we study whether fragments of metarules can be logically reduced to minimal finite subsets. We consider two traditional forms of logical reduction: subsumption and entailment. We also consider a new reduction technique called derivation reduction, which is based on SLD-resolution. We compute reduced sets of metarules for fragments relevant to ILP and theoretically show whether these reduced sets are reductions for more general infinite fragments. We experimentally compare learning with reduced sets of metarules on three domains: Michalski trains, string transformations, and game rules. In general, derivation reduced sets of metarules outperform subsumption and entailment reduced sets, both in terms of predictive accuracies and learning times.

Highlights

  • Deciding which metarules to use for a given learning task is a major open challenge (Cropper 2017; Cropper and Muggleton 2014) and is a trade-off between efficiency and expressivity: the hypothesis space grows given more metarules (Cropper and Muggleton 2014; Lin et al 2014), so we wish to use fewer metarules, but if we use too few metarules then we lose expressivity.

  • We describe the logical reduction problem (Sect. 3).

  • We describe subsumption and entailment reduction, and introduce derivation reduction, the problem of removing derivationally redundant clauses from a clausal theory (Sect. 3).

  • We study the decidability of the three reduction problems and show, for instance, that the derivation reduction problem is undecidable for arbitrary Horn theories (Sect. 3).

  • We introduce two general reduction algorithms that take a reduction relation as a parameter.

  • The study of metarules has implications for many inductive logic programming (ILP) approaches (Albarghouthi et al 2017; Campero et al 2018; Cropper and Muggleton 2019; Emde et al 1983; Evans and Grefenstette 2018; Flener 1996; Kaminski et al 2018; Kietz and Wrobel 1992; Muggleton et al 2015; De Raedt and Bruynooghe 1992; Si et al 2018; Wang et al 2014). We focus on meta-interpretive learning (MIL), a form of ILP based on a Prolog meta-interpreter.
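The generic reduction loop mentioned in the highlights can be sketched as follows. This is a minimal illustration, not the paper's implementation: the `redundant` callable is a placeholder for whichever reduction relation (subsumption, entailment, or derivation) is plugged in, and clauses are modelled here as simple sets of literals.

```python
def reduce_theory(clauses, redundant):
    """Repeatedly drop any clause that is redundant with
    respect to the remaining clauses, until none is."""
    kept = list(clauses)
    changed = True
    while changed:
        changed = False
        for c in list(kept):
            rest = [d for d in kept if d is not c]
            if redundant(c, rest):
                kept = rest
                changed = True
                break
    return kept

# Toy redundancy relation: a clause (a set of literals) is redundant
# if some distinct clause in the rest of the theory subsumes it,
# i.e. is a subset of it.
subsumes = lambda c, rest: any(d <= c and d != c for d in rest)

theory = [{"p"}, {"p", "q"}, {"r"}]
print(reduce_theory(theory, subsumes))  # [{'p'}, {'r'}]
```

Because the reduction relation is a parameter, the same loop serves for subsumption, entailment, and derivation checks; only the redundancy test changes.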

Introduction

To learn the grandparent/2 relation given the parent/2 relation, the chain metarule would be suitable: P(A, B) ← Q(A, C), R(C, B). In this metarule the letters P, Q, and R denote existentially quantified second-order variables (variables that can be bound to predicate symbols) and the letters A, B, and C denote universally quantified first-order variables (variables that can be bound to constant symbols). Given the background parent/2 relation and examples of the grandparent/2 relation, ILP approaches will try to find suitable substitutions for the existentially quantified second-order variables, such as the substitutions {P/grandparent, Q/parent, R/parent}, to induce the theory: grandparent(A, B) ← parent(A, C), parent(C, B). By contrast, it is impossible to learn the grandparent/2 relation using only metarules with monadic predicates.
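The instantiated chain metarule above can be checked with a small sketch. This is an illustration only, with hypothetical parent/2 facts; it evaluates the clause grandparent(A, B) ← parent(A, C), parent(C, B) obtained from the substitution {P/grandparent, Q/parent, R/parent}.

```python
# Hypothetical background knowledge: parent/2 facts.
parent = {("ann", "bob"), ("bob", "carol")}

# Instance of the chain metarule under {P/grandparent, Q/parent, R/parent}:
# grandparent(A, B) <- parent(A, C), parent(C, B).
def grandparent(a, b):
    people = {x for pair in parent for x in pair}
    return any((a, c) in parent and (c, b) in parent for c in people)

print(grandparent("ann", "carol"))  # True: ann -> bob -> carol
```

The existential second-order variables are fixed here by hand; an ILP system searches over such substitutions automatically.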

