Abstract

There is an ongoing philosophical and scientific debate concerning the nature of computational explanation in the neurosciences. Recently, some have cited modeling work involving so-called canonical neural computations—standard computational modules that apply the same fundamental operations across multiple brain areas—as evidence that computational neuroscientists sometimes employ an explanatory scheme distinct from that of mechanistic explanation. Because these neural computations can rely on diverse circuits and mechanisms, modeling the underlying mechanisms is supposed to be of limited explanatory value. I argue that these conclusions about computational explanation in neuroscience are mistaken, and rest upon a number of confusions about the proper scope of mechanistic explanation and the relevance of multiple realizability considerations. Once these confusions are resolved, the mechanistic character of computational explanations can once again be appreciated.
