Abstract

Statistical Relational Learning (SRL) is a growing field in Machine Learning that aims to integrate logic-based learning approaches with probabilistic graphical models. Markov Logic Networks (MLNs) are one of the state-of-the-art SRL models; they combine first-order logic and Markov networks (MNs) by attaching weights to first-order formulas and viewing these formulas as templates for features of MNs. Learning a model in SRL consists of learning the structure (in MLNs, the logical clauses) and the parameters (in MLNs, the weight of each clause). Structure learning of MLNs is performed by maximizing a likelihood function over relational databases, and MLNs have been successfully applied to problems in relational and uncertain domains. Theory revision is the process of refining an existing theory by generalizing or specializing it, depending on the nature of the new evidence: if a positive example is not explained, the theory must be generalized, whereas if a negative example is explained, the theory must be specialized in order to exclude it. Current SRL systems do not revise an existing model but instead learn structure and parameters from scratch. In this paper we propose a novel refinement algorithm for theory revision under the statistical-logical framework of MLNs. The novelty of the proposed approach lies in a tight integration of structure and parameter learning of an SRL model in a single step, within which a specialization or generalization step is performed for theory refinement.
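To make the MLN semantics mentioned above concrete, the following is a minimal Python sketch (not taken from the paper) of the standard MLN log-linear model, in which the probability of a possible world x is proportional to exp(sum_i w_i * n_i(x)), where n_i(x) is the number of true groundings of weighted formula i in x. The domain, clauses, and weights below are illustrative toy choices.

```python
import itertools
import math

# Toy MLN: weighted first-order clauses over a two-constant domain.
# Clause 1: Smokes(x) => Cancer(x)                  (weight 1.5)
# Clause 2: Friends(x,y) ^ Smokes(x) => Smokes(y)   (weight 1.1)
# A "world" is a dict mapping every ground atom to True/False.

DOMAIN = ["Anna", "Bob"]

def n_smokes_cancer(world):
    # Number of true groundings of Smokes(x) => Cancer(x).
    return sum((not world[("Smokes", a)]) or world[("Cancer", a)]
               for a in DOMAIN)

def n_friends_smokes(world):
    # Number of true groundings of Friends(x,y) ^ Smokes(x) => Smokes(y).
    return sum((not (world[("Friends", a, b)] and world[("Smokes", a)]))
               or world[("Smokes", b)]
               for a in DOMAIN for b in DOMAIN)

CLAUSES = [(1.5, n_smokes_cancer), (1.1, n_friends_smokes)]

def all_ground_atoms():
    atoms = [(p, a) for a in DOMAIN for p in ("Smokes", "Cancer")]
    atoms += [("Friends", a, b) for a in DOMAIN for b in DOMAIN]
    return atoms

def score(world):
    # Unnormalized log-probability: sum_i w_i * n_i(world).
    return sum(w * n(world) for w, n in CLAUSES)

def probability(target_world):
    # P(X = x) = exp(score(x)) / Z, with Z summed over all possible worlds
    # (feasible only for this tiny domain).
    atoms = all_ground_atoms()
    z = sum(math.exp(score(dict(zip(atoms, values))))
            for values in itertools.product([False, True], repeat=len(atoms)))
    return math.exp(score(target_world)) / z

if __name__ == "__main__":
    # Example: the world where every ground atom is true.
    world = {atom: True for atom in all_ground_atoms()}
    print(probability(world))
```

Under this reading, structure learning and the revision operators discussed in the abstract amount to adding, removing, generalizing, or specializing clauses and re-estimating their weights so as to maximize the (pseudo-)likelihood of a relational database.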
