Abstract

This paper introduces explanation in computational cognitive science as the process of elucidating why phenomena or events occur within a framework of causal relationships. It highlights the similarity between everyday and scientific explanation when examined through the lens of cognitive processes, while noting that scientific explanations are more systematic, rigorous, and precise.
The paper discusses three crucial stages of explanation: first, gathering information that establishes the structure of the system in question; second, generating hypotheses that advance the explanation; and third, evaluating competing hypotheses or explanations. It then introduces computational implementations of scientific explanation in cognitive science through deductive, probabilistic, and neural network approaches.
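To make the three stages concrete, a minimal Python sketch is given below; the wet-pavement scenario, the candidate hypotheses, and the probability values are hypothetical illustrations rather than examples from the paper, and evaluation by conditional probability is used here only as one possible scoring rule for the third stage.

    # Hypothetical sketch of the three stages, with evaluation done by conditional
    # probability (Bayes' rule) purely for illustration; the numbers are made up.
    # Stage 1 (gathered information): the pavement is observed to be wet.
    # Stage 2 (generated hypotheses): it rained, or the sprinkler was on.
    priors = {"it_rained": 0.3, "sprinkler_was_on": 0.1}
    likelihoods = {"it_rained": 0.9, "sprinkler_was_on": 0.6}   # P(wet pavement | hypothesis)

    # Stage 3 (evaluation): P(H | E) is proportional to P(E | H) * P(H), normalised over the candidates.
    unnormalised = {h: likelihoods[h] * priors[h] for h in priors}
    total = sum(unnormalised.values())
    posteriors = {h: round(p / total, 3) for h, p in unnormalised.items()}
    print(posteriors, "->", max(posteriors, key=posteriors.get))

Here "it rained" wins simply because it makes the gathered evidence more expected; any other evaluation criterion could be plugged into the third stage in the same way.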
In the deductive approach, the discussion centers on Hempel's (1965, 1966) explanatory model, illustrating how it can be simulated as a rule-based system once laws and antecedent conditions are given. The probabilistic approach, built primarily on Bayesian networks, is shown to involve generating candidate hypotheses and assessing them by means of conditional probabilities. Similarly, the neural network approach demonstrates that hypothesis generation, that is, explanation by identifying causes, can arise from the interaction of neuron-like units.
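As a minimal sketch of this rule-based reading of Hempel's model, the snippet below treats laws as if-then rules and antecedent conditions as known facts, and accepts an explanation when the explanandum can be deduced by forward chaining; the function name, the law, and the conditions are hypothetical illustrations, not the paper's implementation.

    # Hypothetical sketch: a deductive-nomological explanation as forward chaining
    # over if-then "laws", starting from a set of antecedent conditions.
    def deduce(laws, conditions, explanandum):
        """Return the derivation steps if the explanandum follows from the laws and conditions."""
        known = set(conditions)
        derivation = []
        changed = True
        while changed:
            changed = False
            for antecedents, consequent in laws:
                if consequent not in known and all(a in known for a in antecedents):
                    known.add(consequent)
                    derivation.append((antecedents, consequent))
                    changed = True
        return derivation if explanandum in known else None

    # Illustrative law: anything metallic that is heated expands.
    laws = [(("is_metal", "is_heated"), "expands")]
    conditions = ["is_metal", "is_heated"]
    print(deduce(laws, conditions, "expands"))   # [(('is_metal', 'is_heated'), 'expands')]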
Notably, one significant advantage of the neural network approach is its ability to perform abductive inference, or hypothesis generation, not only from linguistic expressions but also from visual, emotional, and other non-linguistic information taken as input, and to produce non-linguistic outputs. This ability to accommodate a variety of forms of expression is termed a "multi-modal mode" of explanation.
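As a rough connectionist-style illustration of this multi-modal point, the sketch below casts abduction as activation spreading from non-linguistic input units (sensory cues) to candidate-cause units, with the most strongly activated unit taken as the generated hypothesis; the cues, causes, and connection weights are hypothetical and not taken from the paper.

    # Hypothetical sketch: abductive hypothesis generation as activation spreading
    # from non-linguistic (sensory) input units to candidate-cause units.
    import numpy as np

    cues = ["high_vibration", "high_temperature", "burning_smell"]   # non-linguistic inputs
    causes = ["worn_bearing", "blocked_cooling_fan"]

    # Connection weights from cue units (columns) to cause units (rows); values chosen for illustration.
    W = np.array([[0.9, 0.3, 0.2],
                  [0.1, 0.8, 0.7]])

    observed = np.array([0.0, 1.0, 1.0])        # the machine runs hot and smells of burning

    activation = W @ observed                   # interaction of units: weighted summation of the cues
    best = causes[int(np.argmax(activation))]   # the most activated cause unit is the abduced explanation
    print(dict(zip(causes, activation.round(2))), "->", best)

Because the input is a pattern of activation rather than a sentence, the same mechanism applies whether the cues are visual, emotional, or otherwise non-linguistic.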
