Abstract

The field of Probabilistic Logic Programming (PLP) has seen significant advances in the last 20 years, with many proposals for languages that combine probability with logic programming. From the start, the problem of learning probabilistic logic programs has attracted much attention, and learning such programs forms a whole subfield of Inductive Logic Programming (ILP). In Probabilistic ILP (PILP), two problems are considered: learning the parameters of a program given its structure (the rules), and learning both the structure and the parameters. Structure learning systems usually use parameter learning as a subroutine. In this article we present an overview of PILP and discuss the main results.
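To make this division of labor concrete, the sketch below shows a generic greedy structure learner that scores each candidate program by first fitting its parameters. It is a hypothetical illustration, not any specific PILP system; the names refine, learn_parameters, and score are assumptions standing in for a refinement operator, a parameter-learning routine, and a scoring function such as the log-likelihood.

```python
# Schematic sketch (hypothetical, not a specific system from the paper):
# greedy structure search that calls parameter learning as a subroutine.

def learn_structure(examples, refine, learn_parameters, score, max_iters=10):
    """refine(program) yields candidate programs obtained by adding or changing rules;
    learn_parameters(program, examples) returns the program with fitted probabilities;
    score(program, examples) is, e.g., the log-likelihood of the examples."""
    best = learn_parameters([], examples)            # start from an empty program
    best_score = score(best, examples)
    for _ in range(max_iters):
        improved = False
        for candidate in refine(best):
            fitted = learn_parameters(candidate, examples)   # inner parameter learning
            s = score(fitted, examples)
            if s > best_score:
                best, best_score, improved = fitted, s, True
        if not improved:                              # stop when no refinement helps
            break
    return best
```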

Highlights

  • Probabilistic Logic Programming (PLP) started in the early 90s with seminal works such as those of Dantsin (1991), Ng and Subrahmanian (1992), Poole (1993), and Sato (1995).

  • The field has steadily developed and many proposals for the integration of logic programming and probability have appeared, allowing the representation of both complex relations among entities and uncertainty over them. These proposals can be grouped into two classes: those that use a variant of the distribution semantics (Sato, 1995) and those that follow a Knowledge Base Model Construction (KBMC) approach (Wellman et al., 1992; Bacchus, 1993).

  • As a direction for future work, PLP can be framed within the broader area of Probabilistic Programming (PP), which is receiving increasing attention, especially in the field of Machine Learning, as testified by the ongoing DARPA project “Probabilistic Programming for Advancing Machine Learning.”

Summary

INTRODUCTION

Probabilistic Logic Programming (PLP) started in the early 90s with seminal works such as those of Dantsin (1991), Ng and Subrahmanian (1992), Poole (1993), and Sato (1995). The field has steadily developed and many proposals for the integration of logic programming and probability have appeared, allowing the representation of both complex relations among entities and uncertainty over them. These proposals can be grouped into two classes: those that use a variant of the distribution semantics (Sato, 1995) and those that follow a Knowledge Base Model Construction (KBMC) approach (Wellman et al., 1992; Bacchus, 1993). The languages following a KBMC approach include Relational Bayesian Networks (Jaeger, 1998), CLP(BN) (Santos Costa et al., 2003), Bayesian Logic Programs (Kersting and De Raedt, 2001), and the Prolog Factor Language (Gomes and Santos Costa, 2012). In these languages, a program is a template for generating a ground graphical model, be it a Bayesian network or a Markov network. We present an updated overview of PILP by concentrating on languages under the distribution semantics.
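The following minimal sketch illustrates the idea behind the distribution semantics on a hypothetical two-coin program (the facts heads(1), heads(2) and the query win are illustrative assumptions, not taken from the paper): each probabilistic fact is independently included or excluded, every total choice yields an ordinary logic program (a "world"), and the probability of a query is the sum of the probabilities of the worlds in which it holds.

```python
# Minimal sketch of the distribution semantics, assuming the hypothetical program:
#   0.4 :: heads(1).    0.7 :: heads(2).
#   win :- heads(1).    win :- heads(2).
from itertools import product

prob_facts = {"heads(1)": 0.4, "heads(2)": 0.7}

def query_true(world):
    # win holds if at least one coin landed heads in this world.
    return world["heads(1)"] or world["heads(2)"]

def query_probability():
    total = 0.0
    facts = list(prob_facts)
    for choice in product([True, False], repeat=len(facts)):
        world = dict(zip(facts, choice))
        # Probability of this total choice: product over included/excluded facts.
        p = 1.0
        for f, included in world.items():
            p *= prob_facts[f] if included else 1.0 - prob_facts[f]
        if query_true(world):
            total += p
    return total

print(query_probability())   # 1 - 0.6 * 0.3 = 0.82
```

Enumerating worlds, as done here, is exponential in the number of probabilistic facts; actual PLP systems use knowledge compilation or approximate inference instead, but the semantics being computed is the same.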

LANGUAGES UNDER THE DISTRIBUTION SEMANTICS
LEARNING

The problem that PILP aims at solving can be expressed as:
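The statement below is a plausible formulation following the usual presentation in the PILP literature; the symbols B, E+, E−, and the log-likelihood objective are illustrative assumptions rather than the paper's exact wording.

```latex
% Hedged sketch of the PILP learning problem (illustrative formulation).
\textbf{Given:} background knowledge $B$, positive examples $E^{+}=\{e_1,\dots,e_Q\}$,
negative examples $E^{-}=\{e_{Q+1},\dots,e_R\}$, and a language bias defining the
space of candidate programs.

\textbf{Find:} a probabilistic logic program $P$ (its rules and/or its parameters)
maximizing, for example, the log-likelihood
\[
  \sum_{i=1}^{Q}\log \Pr\nolimits_{B\cup P}(e_i)
  + \sum_{i=Q+1}^{R}\log\bigl(1-\Pr\nolimits_{B\cup P}(e_i)\bigr).
\]
```

Parameter learning fixes the rules and optimizes only the probabilities; structure learning searches over the rules as well, typically invoking parameter learning on each candidate program.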
DISCUSSION AND DIRECTIONS