Abstract

In many applications of Inductive Logic Programming (ILP), learning occurs from a knowledge base that contains a large number of examples. Storing such a knowledge base can consume a great deal of memory, yet the information associated with different examples often overlaps substantially. To reduce memory consumption, we propose a method for representing a knowledge base more compactly. We achieve this by introducing a meta-theory that builds new theories out of other (smaller) theories. In this way, the information associated with an example can be built from the information associated with one or more other examples, and redundant storage of shared information is avoided. We also discuss algorithms for constructing the information associated with example theories and report on a number of experiments evaluating our method in different problem domains.
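To make the idea concrete, here is a minimal sketch (not the paper's actual data structures or algorithms) of how an example's theory might be stored as a small delta over other theories, so that clauses shared between examples are kept in memory only once; the Theory class, the materialize method, and the clause names are all illustrative assumptions.

```python
from dataclasses import dataclass


@dataclass
class Theory:
    """A theory stored as local clauses plus references to parent theories."""
    clauses: frozenset          # clauses stored directly in this node only
    parents: tuple = ()         # (smaller) theories this one is built from

    def materialize(self) -> frozenset:
        """Reconstruct the full clause set for this theory on demand."""
        full = set(self.clauses)
        for parent in self.parents:
            full |= parent.materialize()
        return frozenset(full)


# Shared background information is stored once (hypothetical clauses).
background = Theory(frozenset({"bond(c1, c2)", "atom(c1, carbon)"}))

# Each example stores only the clauses it does not share with its parents.
ex1 = Theory(frozenset({"active(m1)"}), parents=(background,))
ex2 = Theory(frozenset({"inactive(m2)"}), parents=(background,))

# Full example theories are rebuilt only when needed.
print(ex1.materialize())  # background clauses plus ex1's own clause
```

Under this scheme, memory grows with the number of distinct clauses rather than with the total size of all example theories, at the cost of reconstructing a theory when it is needed.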
