Abstract
There have been recent disagreements in the philosophy of neuroscience regarding which sorts of scientific models provide mechanistic explanations, and which do not (e.g. computational models, dynamical models, topological models). These disagreements often hinge on two commonly adopted, but conflicting, ways of understanding mechanistic explanations: what I call the “representation-as” account, and the “representation-of” account. In this paper, I argue that neither account does justice to neuroscientific practice. In their place, I offer a new alternative that can defuse some of these disagreements. I argue that individual models do not provide mechanistic explanations by themselves (regardless of what type of model they are). Instead, individual models are always used to complement a huge body of background information and pre-existing models of the target system. With this in mind, I argue that mechanistic explanations are distributed across sets of different, and sometimes contradictory, scientific models. Each of these models contributes limited, but essential, information to the same mechanistic explanation, but none can be considered a mechanistic explanation in isolation from the others.