Abstract

Meta-Interpretive Learning (MIL) learns logic programs from examples by instantiating meta-rules. The recent Metagol system efficiently solves MIL problems by relying on the procedural bias imposed by Prolog. Its focus on positive examples, however, means that Metagol can detect the derivability of negative examples only in a later check, which can severely degrade performance. Viewed as combinatorial search problems, MIL problems can alternatively be solved by employing Answer Set Programming (ASP). With a sophisticated ASP solver, one may expect violations of negative examples to be propagated directly, but such an effect has never been explicitly exploited for general MIL. In fact, a straightforward ASP encoding of MIL results in a huge search space due to the lack of procedural bias and the need for grounding. To address these challenges, we encode MIL in the HEX formalism, an extension of ASP that allows us to outsource the background knowledge, and we restrict the search space by modeling the procedural bias. This way, the import of constants from the background knowledge can be limited, for a given type of meta-rules, to the relevant ones. Moreover, by abstracting from term manipulations in the encoding and exploiting the HEX interface mechanism, the import of such constants can be prevented completely in order to avoid the grounding bottleneck. An experimental evaluation shows promising results.
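To illustrate the grounding issue the abstract refers to, the following is a minimal sketch of what a straightforward ASP encoding of MIL with a single chain meta-rule could look like. It is not the paper's HEX-based encoding; all predicate, constant, and example names are hypothetical.

% Vocabulary of predicate symbols and background facts (hypothetical toy data).
pred(mother). pred(father). pred(grandparent).
bk(mother, ann, bob). bk(father, bob, carl).

% Guess instantiations of the chain meta-rule  P(X,Y) :- Q(X,Z), R(Z,Y).
{ meta_chain(P,Q,R) : pred(P), pred(Q), pred(R) }.

% An atom holds if it is a background fact or derivable via a guessed rule.
holds(P,X,Y) :- bk(P,X,Y).
holds(P,X,Y) :- meta_chain(P,Q,R), holds(Q,X,Z), holds(R,Z,Y).

% Positive examples must be derivable, negative examples must not.
pos(grandparent, ann, carl).
neg(grandparent, bob, ann).
:- pos(P,X,Y), not holds(P,X,Y).
:- neg(P,X,Y), holds(P,X,Y).

% Prefer hypotheses with fewer instantiated meta-rules.
#minimize { 1,P,Q,R : meta_chain(P,Q,R) }.

Even in this toy setting, the grounding of holds/3 ranges over all constants occurring in the background knowledge, which is exactly the blow-up that the HEX-based approach aims to avoid by outsourcing the background knowledge and restricting or preventing the import of constants.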
