Abstract
Evaluations of the advantages of Probabilistic Inductive Logic Programming (PILP) over ILP have not been conducted from a computational learning theory point of view. We propose a PILP framework, projection-based PILP, in which surjective projection functions are used to produce a "lossy" compressed dataset from an ILP dataset. We present sample complexity results, including conditions under which projection-based PILP needs fewer examples than standard PAC learning. We experimentally confirm the theoretical bounds for projection-based PILP in the Blackjack domain using Cellist, a system which machine-learns Probabilistic Logic Automata. In our experiments, projection-based PILP shows lower predictive error than the theoretical bounds predict and achieves substantially lower predictive error than ILP. To the authors' knowledge, this is the first paper describing both a computational learning theory analysis and related empirical results on an advantage of PILP over ILP.
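The core idea of the framework, as summarised above, is that a surjective projection function maps each ILP example onto a coarser representation, so distinct examples can collapse into one projected example and the dataset shrinks lossily. The following sketch is purely illustrative and not from the paper itself: the function names, the Blackjack encoding (hands as tuples of card values, projected onto their total), and the use of a multiset to record collapsed multiplicities are all assumptions made for demonstration.

```python
# Illustrative sketch (assumed encoding, not the paper's actual
# implementation): a surjective projection compresses a dataset of
# labelled ILP examples into a smaller, "lossy" projected dataset.

from collections import Counter

def project(hand):
    """Surjective projection: map a Blackjack hand (a tuple of card
    values) onto its point total, discarding the card identities.
    Surjective because every reachable total has at least one hand."""
    return sum(hand)

def compress(dataset):
    """Apply the projection to every (example, label) pair, returning
    a multiset over the smaller projected example space."""
    return Counter((project(ex), label) for ex, label in dataset)

dataset = [((10, 9), "stand"), ((9, 10), "stand"), ((5, 4), "hit")]
compressed = compress(dataset)
# The distinct hands (10, 9) and (9, 10) collapse into the single
# projected example (19, "stand") with multiplicity 2.
```

Because many source examples can share one image under the projection, the projected example space is smaller than the original, which is the intuition behind the paper's claim that fewer examples can suffice.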