Abstract
In this paper, we describe foxPSL, a fast, optimized, and extended implementation of Probabilistic Soft Logic (PSL) based on the distributed graph processing framework Signal/Collect. PSL is one of the leading formalisms of statistical relational learning, a recently developed field of machine learning that aims to represent both uncertainty and rich relational structures, usually by combining logical representations with probabilistic graphical models. PSL can be seen as both a probabilistic logic and a template language for hinge-loss Markov random fields, a type of continuous Markov random field (MRF) in which Maximum a Posteriori (MAP) inference is very efficient, since it can be formulated as a constrained convex minimization problem, as opposed to the discrete optimization problem required for standard MRFs. From the logical perspective, a key feature of PSL is its ability to represent soft truth values, allowing the expression of complex domain knowledge, such as degrees of truth, alongside uncertainty. foxPSL supports the full PSL pipeline, from problem definition to a distributed solver that implements Alternating Direction Method of Multipliers (ADMM) consensus optimization. It provides a Domain Specific Language that extends standard PSL with a class system and existential quantifiers, allowing for efficient grounding. Moreover, it implements a series of configurable optimizations, such as optimized grounding of constraints and lazy inference, that improve grounding and inference time. We perform an extensive evaluation, comparing the performance of foxPSL to a state-of-the-art implementation of ADMM consensus optimization in GraphLab, and show an improvement in both inference time and solution quality. Finally, we evaluate the impact of the optimizations on execution time and discuss the trade-offs associated with each optimization.
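As a rough illustration of the soft logic underlying PSL (a minimal sketch in Python, not foxPSL's actual Scala implementation): truth values are continuous in [0, 1], logical connectives follow Łukasiewicz semantics, and each ground rule body → head contributes a hinge-loss "distance to satisfaction" potential, which is what makes MAP inference a convex problem.

```python
def l_and(a, b):
    # Łukasiewicz t-norm (soft conjunction) over truth values in [0, 1]
    return max(0.0, a + b - 1.0)

def l_or(a, b):
    # Łukasiewicz t-conorm (soft disjunction)
    return min(1.0, a + b)

def l_not(a):
    # Łukasiewicz negation
    return 1.0 - a

def distance_to_satisfaction(body, head):
    # Hinge-loss penalty of a ground rule body -> head: zero when the
    # head is at least as true as the body, linear in the violation otherwise.
    return max(0.0, body - head)
```

Because each such penalty is a hinge (piecewise-linear and convex) in the truth-value variables, summing weighted penalties over all ground rules yields the constrained convex minimization problem mentioned above.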
Highlights
Probabilistic Soft Logic (PSL) [1,2,3,4] is one of the leading formalisms of statistical relational learning, a recently developed field of machine learning that aims to represent both uncertainty and rich relational structures, usually by combining logical representations with probabilistic graphical models.
Given the continuous nature of the truth values, the use of Łukasiewicz operators, and the restriction of logical formulae to Horn clauses with disjunctive heads, Maximum a Posteriori (MAP) inference in PSL can be formulated as a constrained convex minimization problem.
This problem can be cast as a consensus optimization problem [3,4] and solved efficiently with distributed algorithms such as the Alternating Direction Method of Multipliers (ADMM), recently popularized by [6].
Summary
Probabilistic Soft Logic (PSL) [1,2,3,4] is one of the leading formalisms of statistical relational learning, a recently developed field of machine learning that aims to represent both uncertainty and rich relational structures, usually by combining logical representations with probabilistic graphical models. Given the continuous nature of the truth values, the use of Łukasiewicz operators, and the restriction of logical formulae to Horn clauses with disjunctive heads, Maximum a Posteriori (MAP) inference in PSL can be formulated as a constrained convex minimization problem. This problem can be cast as a consensus optimization problem [3,4] and solved efficiently with distributed algorithms such as the Alternating Direction Method of Multipliers (ADMM), recently popularized by [6]. We describe foxPSL and its features, present an empirical evaluation, and conclude with a discussion of future work.
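The consensus-optimization pattern that ADMM solves can be sketched on a toy problem (this is an illustrative single-machine sketch with hypothetical quadratic subproblems, not foxPSL's distributed solver): each subproblem keeps a local copy x_i of the shared variable, a consensus variable z averages the local copies, and dual variables u_i push the copies toward agreement.

```python
# Toy consensus ADMM: minimize sum_i (x_i - a_i)^2 subject to x_i = z.
# For these quadratic terms the local x-update has a closed form, and
# the consensus optimum is z = mean(a).
def consensus_admm(a, rho=1.0, iters=100):
    n = len(a)
    x = [0.0] * n  # local copies of the shared variable
    u = [0.0] * n  # scaled dual variables
    z = 0.0        # consensus variable
    for _ in range(iters):
        # Local updates: independent per subproblem, hence parallelizable
        # (in foxPSL these run as vertices in the Signal/Collect graph).
        x = [(2.0 * a_i + rho * (z - u_i)) / (2.0 + rho)
             for a_i, u_i in zip(a, u)]
        # Consensus update: average the local estimates plus duals.
        z = sum(x_i + u_i for x_i, u_i in zip(x, u)) / n
        # Dual updates: accumulate the disagreement with the consensus.
        u = [u_i + x_i - z for u_i, x_i in zip(u, x)]
    return z
```

In the PSL setting, each ground rule's hinge-loss potential plays the role of one local subproblem, so the same decomposition distributes MAP inference across the graph.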