Abstract
Markov Logic Networks (MLNs) represent relational knowledge using a combination of first-order logic and probabilistic models. In this paper, we develop an approach to explain the results of probabilistic inference in MLNs. Unlike approaches such as LIME and SHAP that explain black-box classifiers, explaining MLN inference is harder because the data is interconnected. We develop an explanation framework that computes importance weights for MLN formulas based on their influence on the marginal likelihood. However, computing these importance weights exactly is a hard problem, and even approximate sampling methods become unreliable when the MLN is large, resulting in non-interpretable explanations. Therefore, we develop an approach that reduces the large MLN into simpler coalitions of formulas that approximately preserve relational dependencies, and we generate explanations based on these coalitions. We then weight the explanations from different coalitions and combine them into a single explanation. Our experiments on several text-processing problems illustrate that our approach generates more interpretable explanations than other state-of-the-art methods.
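The coalition-based scheme described above can be sketched in miniature as follows. This is an illustrative toy, not the paper's actual procedure: the additive per-formula scores, the `marginal_log_likelihood` stub, and the likelihood-proportional coalition weighting are all hypothetical assumptions standing in for real MLN inference.

```python
# Hypothetical stand-in for an MLN's marginal log-likelihood. In a real
# system this value would come from probabilistic inference over the
# ground network; here each formula simply contributes an additive score.
FORMULA_SCORES = {"f1": 2.0, "f2": 0.5, "f3": 1.5, "f4": 0.2}

def marginal_log_likelihood(formulas):
    """Toy likelihood of a set of MLN formulas (assumed additive)."""
    return sum(FORMULA_SCORES[f] for f in formulas)

def coalition_importances(coalition):
    """Importance weight of each formula within one coalition:
    the drop in likelihood when that formula is removed."""
    full = marginal_log_likelihood(coalition)
    return {
        f: full - marginal_log_likelihood([g for g in coalition if g != f])
        for f in coalition
    }

def combined_explanation(coalitions):
    """Weight each coalition's explanation by its share of the total
    likelihood mass, then merge into a single importance map."""
    weights = [marginal_log_likelihood(c) for c in coalitions]
    total = sum(weights)
    combined = {}
    for w, coalition in zip(weights, coalitions):
        for f, imp in coalition_importances(coalition).items():
            combined[f] = combined.get(f, 0.0) + (w / total) * imp
    return combined

# Two small coalitions standing in for a decomposed large MLN.
print(combined_explanation([["f1", "f2"], ["f3", "f4"]]))
```

Because the toy likelihood is additive, each formula's within-coalition importance equals its own score, so the combined explanation preserves the relative importance ordering inside each coalition while discounting formulas from low-likelihood coalitions.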