Abstract

AI systems must be able to learn, reason logically, and handle uncertainty. While much research has focused on each of these goals individually, only recently have we begun to attempt to achieve all three at once. In this talk I will describe Markov logic, a representation that combines first-order logic and probabilistic graphical models, and algorithms for learning and inference in it. A knowledge base in Markov logic is a set of weighted first-order formulas, viewed as templates for features of Markov networks. The weights and probabilistic semantics make it easy to combine knowledge from a multitude of noisy, inconsistent sources, reason across imperfectly matched ontologies, etc. Inference in Markov logic is performed by weighted satisfiability testing, Markov chain Monte Carlo, and (where appropriate) specialized engines. Formulas can be refined using inductive logic programming techniques, and weights can be learned either generatively (using pseudo-likelihood) or discriminatively (using a voted perceptron). Markov logic has been successfully applied to problems in entity resolution, social network modeling, and information extraction, among others, and is the basis of the open-source Alchemy system. (Joint work with Stanley Kok, Hoifung Poon, Matt Richardson and Parag Singla.)
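For concreteness, the standard Markov logic semantics (not spelled out in the abstract above) defines the probability of a possible world $x$ as

\[
P(X = x) = \frac{1}{Z} \exp\Big( \sum_i w_i \, n_i(x) \Big),
\]

where $n_i(x)$ is the number of true groundings of the $i$-th formula in $x$, $w_i$ is that formula's weight, and $Z$ is the partition function normalizing over all possible worlds. Letting a weight go to infinity recovers a hard first-order constraint, while finite weights make violating a formula improbable rather than impossible, which is what allows knowledge from noisy, inconsistent sources to be combined.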
