Abstract

AI systems must be able to learn, reason logically, and handle uncertainty. While much research has focused on each of these goals individually, only recently have we begun to attempt to achieve all three at once. In this talk, I describe Markov logic, a representation that combines first-order logic and probabilistic graphical models, and algorithms for learning and inference in it. Syntactically, Markov logic is first-order logic augmented with a weight for each formula. Semantically, a set of Markov logic formulas represents a probability distribution over possible worlds, in the form of a Markov network with one feature per grounding of a formula in the set, with the corresponding weight. Formulas are learned from relational databases using inductive logic programming techniques. Weights can be learned either generatively (using pseudo-likelihood optimization) or discriminatively (using a voted perceptron algorithm). Inference is performed by a weighted satisfiability solver or by Markov chain Monte Carlo, operating on the minimal subset of the ground network required for answering the query. Experiments in link prediction, entity resolution and other problems illustrate the promise of this approach.
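The semantics described above can be made concrete with a small sketch. In a Markov logic network, the probability of a possible world x is proportional to exp(Σᵢ wᵢ nᵢ(x)), where nᵢ(x) is the number of true groundings of formula i in x. The following is a minimal illustrative example, not the authors' implementation: it uses a single hypothetical formula, Friends(a, b) ⇒ (Smokes(a) ⇔ Smokes(b)) with an assumed weight of 1.5, and computes the distribution by brute-force enumeration over possible worlds (real systems avoid this by grounding only the subnetwork relevant to the query).

```python
from itertools import product
from math import exp

# Hypothetical toy MLN: two constants, one evidence predicate Friends,
# one query predicate Smokes, and one weighted formula:
#   Friends(a, b) => (Smokes(a) <=> Smokes(b)),  weight w = 1.5 (assumed)
people = ["Anna", "Bob"]
friends = {("Anna", "Bob")}   # fixed evidence atoms
w = 1.5                       # formula weight (illustrative choice)

def n_true_groundings(world):
    """Count true groundings of the formula in a possible world.

    `world` maps each person to the truth value of Smokes(person)."""
    count = 0
    for a, b in product(people, repeat=2):
        if a == b:
            continue
        # An implication with a false antecedent is vacuously true.
        satisfied = ((a, b) not in friends) or (world[a] == world[b])
        count += satisfied
    return count

# Enumerate all possible worlds (truth assignments to the Smokes atoms).
worlds = [dict(zip(people, vals))
          for vals in product([False, True], repeat=len(people))]

# Unnormalized score of each world: exp(w * n_i(x)), summed over formulas
# (here just one formula). Z is the partition function.
scores = {tuple(sorted(x.items())): exp(w * n_true_groundings(x))
          for x in worlds}
Z = sum(scores.values())
probs = {k: v / Z for k, v in scores.items()}
```

With this weight, worlds where the two friends agree on smoking status receive higher probability than worlds where they disagree, which is exactly the soft-constraint behavior that distinguishes Markov logic from pure first-order logic.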
