Abstract
Inductive logic programming (ILP) is a form of logic-based machine learning. The goal is to induce a hypothesis (a logic program) that generalises given training examples and background knowledge. As ILP turns 30, we review the last decade of research. We focus on (i) new meta-level search methods, (ii) techniques for learning recursive programs, (iii) new approaches for predicate invention, and (iv) the use of different technologies. We conclude by discussing current limitations of ILP and directions for future research.
Highlights
Inductive logic programming (ILP) (Muggleton, 1991; Muggleton & De Raedt, 1994) is a form of machine learning (ML).
Background knowledge: ILP learns using background knowledge (BK) represented as a logic program.
Because hypotheses are symbolic, they can be added to the BK, so ILP systems naturally support lifelong and transfer learning (Lin et al., 2014; Cropper, 2019, 2020).
Summary
Inductive logic programming (ILP) (Muggleton, 1991; Muggleton & De Raedt, 1994) is a form of machine learning (ML). To illustrate ILP, suppose you want to learn a string transformation program from the following examples:

Input → Output
inductive → e
logic → c
programming → g

In ILP, we represent these examples as logical atoms, such as f([i,n,d,u,c,t,i,v,e], e), where f is the target predicate that we want to learn (the relation to generalise). Given these examples and BK, an ILP system could induce the hypothesis (a logic program):

f(A,B) :- tail(A,C), empty(C), head(A,B).
f(A,B) :- tail(A,C), f(C,B).

Each line of the program is a rule. The first rule says that the relation f(A,B) holds when the three literals tail(A,C), empty(C), and head(A,B) hold. The second rule says that f(A,B) holds when the same relation holds for the tail of A.
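To make the induced program's behaviour concrete, here is a minimal Python sketch (not from the paper) that mirrors the two rules: the base case fires when the tail of the list is empty and returns the head, and the recursive case applies the same relation to the tail. In other words, f(A,B) holds exactly when B is the last element of A.

```python
def f(a):
    """Mirror of the induced logic program: B is the last element of list A."""
    head, *tail = a      # corresponds to head(A,B) and tail(A,C)
    if not tail:         # corresponds to empty(C): first rule (base case)
        return head
    return f(tail)       # second rule: f holds for the tail of A


# e.g. f(list("inductive")) returns 'e'
```

This also shows why the relation generalises: the recursion is agnostic to list length, so the same two rules cover "logic" (returning 'c') and "programming" (returning 'g').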