Abstract

Although it has been argued that mechanistic explanation is compatible with abstraction (i.e., that there are abstract mechanistic models), doubts remain about whether mechanism can account for the explanatory power of significant abstract models in computational neuroscience. Chirimuuta has recently claimed that models describing canonical neural computations (CNCs) must be evaluated using a non-mechanistic framework. I defend two claims regarding these models. First, I argue that their prevailing neurocognitive interpretation is mechanistic. Moreover, a criterion recently proposed by Levy and Bechtel to legitimize abstract mechanistic models, together with a criterion proposed by Chirimuuta herself to distinguish between causal and non-causal explanation, can be employed to show why these models are explanatory only under this interpretation (as opposed to a purely mathematical or non-causal one). Second, I argue that mechanism can account for the special epistemic achievement implied by CNC models. Canonical neural components contribute to an integrated understanding of different cognitive functions: they make it possible to explain these functions by describing different mechanisms constituted by common basic components arranged in different ways.
