Abstract

Probabilistic logic programming (PLP) combines logic programs and probabilities. Due to its expressiveness and simplicity, it has been considered a powerful tool for learning and reasoning in relational domains characterized by uncertainty. Still, learning the parameters and the structure of general PLPs is computationally expensive because of the cost of inference. We have recently proposed a restriction of the general PLP language, called hierarchical PLP (HPLP), in which clauses and predicates are hierarchically organized. HPLPs can be converted into arithmetic circuits or deep neural networks, and inference is much cheaper than for general PLPs. In this paper we present algorithms for learning both the parameters and the structure of HPLPs from data. We first present parameter learning for hierarchical probabilistic logic programs (PHIL), an algorithm that performs parameter estimation of HPLPs using gradient descent and expectation maximization. We then propose structure learning of hierarchical probabilistic logic programs (SLEAHP), which learns both the structure and the parameters of HPLPs from data. Experiments were performed comparing PHIL and SLEAHP with state-of-the-art PLP and Markov Logic Network systems for parameter and structure learning, respectively: PHIL was compared with EMBLEM, ProbLog2 and Tuffy, and SLEAHP with SLIPCOVER, PROBFOIL+, MLN-BC, MLN-BT and RDN-B. The experiments on five well-known datasets show that our algorithms achieve similar, and often better, accuracy in a shorter time.
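As a rough illustration of the gradient-descent side of the parameter learning described above, the sketch below fits the probabilities of three clauses on a toy circuit by minimizing a cross-entropy loss. Everything in it is a simplifying assumption made here for illustration (the toy circuit, the sigmoid reparameterization of the clause probabilities, and the numerical gradients); PHIL itself operates on the arithmetic circuits or neural networks into which the HPLP is translated.

```python
# Rough sketch of gradient-descent parameter learning on a tiny HPLP-style
# circuit; this is an illustration, not PHIL.  Assumptions made here:
# clause probabilities are kept in (0, 1) through a sigmoid
# reparameterization pi_i = sigmoid(w_i), the loss is the cross entropy of
# positive and negative examples, and gradients are estimated numerically
# (central differences) instead of being backpropagated through the circuit.
import math
from typing import Callable, List


def sigmoid(x: float) -> float:
    return 1.0 / (1.0 + math.exp(-x))


def prob_pos(w: List[float]) -> float:
    """Positive example: provable either through clauses 0 and 1 together
    (an AND node) or through clause 2; the OR node uses the probabilistic
    sum 1 - (1 - a)(1 - b)."""
    p0, p1, p2 = (sigmoid(x) for x in w)
    return 1.0 - (1.0 - p0 * p1) * (1.0 - p2)


def prob_neg(w: List[float]) -> float:
    """Negative example: in this toy dataset only clause 2 applies."""
    return sigmoid(w[2])


def loss(w: List[float]) -> float:
    # Cross entropy: push prob_pos towards 1 and prob_neg towards 0.
    return -math.log(prob_pos(w)) - math.log(1.0 - prob_neg(w))


def numeric_grad(f: Callable[[List[float]], float],
                 w: List[float], eps: float = 1e-6) -> List[float]:
    grads = []
    for i in range(len(w)):
        w_hi, w_lo = list(w), list(w)
        w_hi[i] += eps
        w_lo[i] -= eps
        grads.append((f(w_hi) - f(w_lo)) / (2 * eps))
    return grads


w = [0.0, 0.0, 0.0]                 # all clause probabilities start at 0.5
learning_rate = 0.1
for _ in range(500):
    gradient = numeric_grad(loss, w)
    w = [wi - learning_rate * gi for wi, gi in zip(w, gradient)]

print([round(sigmoid(x), 3) for x in w])   # learned clause probabilities
```

On this toy data the loop drives the first two clause probabilities up and the third one down, since the third clause is the only one covering the negative example.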

Highlights

  • Probabilistic logic programs (PLPs) extend logic programs (LPs) with probabilities (Riguzzi, 2018)

  • In Nguembang Fadja et al. (2017) we proposed a new language called hierarchical probabilistic logic programming (HPLP), which is a restriction of the language of logic programs with annotated disjunctions (Vennekens et al., 2004) in which clauses and predicates are hierarchically organized

  • Expectation maximization PHIL (EMPHIL) learns the parameters of hierarchical PLP (HPLP) by applying expectation maximization (EM)

Summary

Introduction

Probabilistic logic programs (PLPs) extend logic programs (LPs) with probabilities (Riguzzi, 2018). HPLPs can be translated efficiently into arithmetic circuits (ACs), on which computing the probability of a query is linear in the number of nodes of the circuit. This makes inference and learning in HPLPs faster than for general PLPs. Before describing probabilistic logic programming (PLP) and hierarchical PLP (HPLP), let us define some basic concepts of first-order logic (FOL) and logic programming (LP). An expression (literal, term or formula) is ground if it does not contain any variable. A clause with exactly one positive literal is called a definite clause.
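To make the translation to arithmetic circuits concrete, here is a minimal sketch (not the authors' implementation) of bottom-up evaluation of such a circuit. It assumes the usual HPLP node semantics, with AND nodes multiplying their children, OR nodes combining them with the probabilistic sum 1 - ∏(1 - p_i), and leaves carrying clause probabilities; the Node class and the toy circuit are hypothetical, and the point is only that each node is visited once, so the cost is linear in the number of nodes.

```python
# Minimal sketch (not the authors' code) of evaluating an HPLP-style
# arithmetic circuit bottom-up.  Assumed semantics: AND nodes multiply the
# values of their children, OR nodes combine them with the probabilistic
# sum 1 - prod(1 - p_i), and leaves hold clause probabilities pi_i.
from dataclasses import dataclass, field
from typing import List


@dataclass
class Node:
    kind: str                       # "leaf", "and" or "or"
    prob: float = 0.0               # used only by leaves
    children: List["Node"] = field(default_factory=list)


def evaluate(node: Node) -> float:
    """Each node is visited exactly once, so evaluation is linear
    in the number of nodes of the circuit."""
    if node.kind == "leaf":
        return node.prob
    values = [evaluate(child) for child in node.children]
    if node.kind == "and":          # conjunction: product of the children
        result = 1.0
        for v in values:
            result *= v
        return result
    if node.kind == "or":           # probabilistic sum of the children
        complement = 1.0
        for v in values:
            complement *= 1.0 - v
        return 1.0 - complement
    raise ValueError(f"unknown node kind: {node.kind}")


# Toy circuit: a query provable either through a clause whose body calls a
# hidden predicate (the AND node) or directly through a second clause.
circuit = Node("or", children=[
    Node("and", children=[Node("leaf", prob=0.3), Node("leaf", prob=0.2)]),
    Node("leaf", prob=0.4),
])
print(evaluate(circuit))            # 1 - (1 - 0.3*0.2) * (1 - 0.4) = 0.436
```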
