Abstract

The claim defended in the paper is that the mechanistic account of explanation can easily embrace idealization in large-scale brain simulations, and that only causally relevant detail should be present in explanatory models. The claim is illustrated with two methodologically different models: (1) Blue Brain, used for particular simulations of the cortical column in hybrid models, and (2) Eliasmith's Spaun model, which is both biologically realistic and able to explain eight different tasks. By drawing on the mechanistic theory of computational explanation, I argue that large-scale simulations require that the explanandum phenomenon be identified; otherwise, the explanatory value of such simulations is difficult to establish, and testing the model empirically by comparing its behavior with the explanandum remains practically impossible. The completeness of the explanation, and hence the explanatory value of the explanatory model, is to be assessed vis-à-vis the explanandum phenomenon, which is not to be conflated with raw observational data and may itself be idealized. I argue that idealizations, which include building models of a single phenomenon displayed by multi-functional mechanisms, lumping together multiple factors in a single causal variable, simplifying the causal structure of the mechanisms, and multi-model integration, are indispensable for complex systems such as brains; otherwise, the model may be as complex as the explanandum phenomenon, which would make it prone to the so-called Bonini paradox. I conclude by enumerating dimensions of empirical validation of explanatory models according to the new mechanism, given in the form of a "checklist" for the modeler.

Highlights

  • Computer simulation is an essential tool in neuroscience and serves various purposes

  • The Blue Brain project offers an unprecedented level of detail, describing a part of the somatosensory cortex in a 14-day-old rat, and Markram claims that the Blue Brain simulations are meant to “aid our understanding of brain function and dysfunction” (Markram 2006, p. 153)

  • Just like the Blue Brain, most other extant large-scale brain simulations do not aim at modeling intelligent behaviors, which occur at temporal scales of minutes to hours, in part because we do not yet know the intermediate-scale structure of the brain, so we are unable to encode it into simulations (De Garis et al. 2010)

Summary

Introduction

Computer simulation is an essential tool in neuroscience and serves various purposes. The account suggests that to serve their explanatory purposes, brain models in general, and computer simulations in particular, may and should be idealized. Mechanists stress that there is a need to precisely specify the explanandum phenomenon, which decides what is relevant to the explanation, so not just any detail counts, and Kaplan naturally does not claim that causally irrelevant detail is explanatory. I do not agree with Kaplan and other mechanists about the role of idealization in neuroscience. While they allow idealization for practical reasons and because of technological limitations, I think idealization is required in principle in explanations of sufficiently complex mechanisms. Idealizations involve building models of a single phenomenon displayed by multi-functional mechanisms (mechanistic explanatory norms do not require a single model to explain everything), lumping together multiple factors in a single causal variable, simplifying the causal structure of the mechanisms, and multi-model integration. I conclude by enumerating dimensions of empirical validation of explanatory models as a “checklist” for the modeler.

Mechanistic account of simulation-based explanation
Blue Brain meets Spaun
Conclusion: evaluating and integrating large-scale simulations