Abstract

Neural-symbolic models provide a powerful tool for tackling complex visual reasoning tasks by combining symbolic program execution for reasoning with deep representation learning for visual recognition. A probabilistic formulation of such models with stochastic latent variables can yield an interpretable reasoning system with less supervision. However, it is still nontrivial to generate reasonable symbolic structures without the guidance of domain knowledge, since doing so generally involves an optimization problem over both continuous and discrete variables. Despite these challenges, the interpretability of such symbolic structures provides an interface for regularizing their generation with domain knowledge. In this article, we propose to incorporate the available domain knowledge into the learning process of probabilistic neural-symbolic (PNS) models via posterior constraints that directly regularize the structure posterior. In this way, our model identifies a middle point where the structure generation process learns mainly from data but also selectively borrows information from domain knowledge. We further present an inductive reasoning scheme in which the posterior constraints are automatically reweighted to handle noisy annotations. Experimental results show that our method achieves state-of-the-art performance on major abstract reasoning datasets and enjoys good generalization capability and data efficiency.
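The core idea of regularizing a structure posterior with domain knowledge can be sketched as a penalty term added to the training objective. The following is a minimal illustrative sketch, not the paper's actual method: it assumes a categorical posterior over candidate symbolic structures and a domain-knowledge prior over the same structures, with all names (`q_logits`, `prior`, `lam`) hypothetical.

```python
import numpy as np

def softmax(x):
    # numerically stable softmax over structure logits
    e = np.exp(x - x.max())
    return e / e.sum()

def constrained_loss(q_logits, nll, prior, lam=0.5):
    """Negative-ELBO surrogate plus a posterior constraint (illustrative).

    q_logits : logits of the approximate posterior over symbolic structures
    nll      : expected negative log-likelihood under the posterior
    prior    : domain-knowledge distribution over the same structures
    lam      : constraint weight; lam = 0 recovers the unconstrained model
    """
    q = softmax(q_logits)
    # KL(q || prior) penalizes structures that domain knowledge disfavors;
    # small epsilons guard against log(0)
    kl = np.sum(q * (np.log(q + 1e-12) - np.log(prior + 1e-12)))
    return nll + lam * kl

# Example: domain knowledge strongly favors structure 0
loss = constrained_loss(np.array([0.2, 0.1, -0.3]), nll=1.0,
                        prior=np.array([0.8, 0.1, 0.1]))
```

Reweighting `lam` per constraint, as the abstract's inductive reasoning scheme suggests, would let the model down-weight constraints derived from noisy annotations.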
