Abstract

During the last few decades, a variety of models have been proposed to address causal reasoning (also known as abduction); most of them were developed within a probabilistic or a logical framework. More recently, a few models have been proposed within a neural framework. The investigation of neural approaches is motivated mainly by the computational burden of the causal reasoning task and by the satisfactory results obtained with neural networks on hard problems in general. A particular class of causal reasoning that raises several difficulties is the cancellation class. From an abstract point of view, cancellation occurs when two causes (hypotheses) cancel each other's explanatory capabilities with respect to a given effect (observation). The contribution of the present work is twofold. First, we extend an existing neural model to handle cancellation interactions. Second, we test the model on a large database and propose objective criteria to quantitatively evaluate the scenarios (explanations) it produces. Simulation results show good performance and stability of the model.
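
To make the cancellation notion concrete, the following minimal sketch (a toy illustration, not the model described in the paper) scores subsets of hypothetical causes against an observed effect. When a promoting cause and an inhibiting cause are hypothesized together, their influences cancel and the pair fails to explain the effect, even though the promoting cause alone does. All names, influence values, and the acceptance threshold are illustrative assumptions.

```python
# Toy illustration of a cancellation interaction in abduction.
# cause_A promotes the effect E, cause_B inhibits it; hypothesizing
# both cancels the explanation that cause_A provides on its own.

from itertools import combinations

# Signed influence of each hypothetical cause on the effect E
# (+1.0: promotes E, -1.0: inhibits E). Values are illustrative only.
influence = {"cause_A": +1.0, "cause_B": -1.0, "cause_C": +1.0}

def explains(hypotheses, observed_effect=True, threshold=0.5):
    """Return True if the combined influence of the hypothesized causes
    accounts for the observed presence (or absence) of the effect."""
    total = sum(influence[h] for h in hypotheses)
    return (total >= threshold) == observed_effect

# Score every non-empty subset of causes as a candidate explanation.
causes = list(influence)
for size in range(1, len(causes) + 1):
    for subset in combinations(causes, size):
        verdict = "explains E" if explains(subset) else "does not explain E"
        print(subset, verdict)
```

Running the sketch shows, for instance, that {cause_A} explains E while {cause_A, cause_B} does not: the inhibiting hypothesis cancels the promoting one, which is the kind of interaction the extended neural model is designed to handle.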
