Abstract

Despite the recent success of neural network models in mimicking animal performance on various tasks, critics worry that these models fail to illuminate brain function. We take it that a central approach to explanation in systems neuroscience is that of mechanistic modeling, where understanding the system requires us to characterize its parts, organization, and activities, and how these give rise to behaviors of interest. However, it remains controversial what it takes for a model to be mechanistic, and whether computational models such as neural networks qualify as explanatory on this approach. We argue that certain kinds of neural network models are in fact good examples of mechanistic models, when an appropriate notion of mechanistic mapping is deployed. Building on existing work on model-to-mechanism mapping (3M), we describe criteria delineating such a notion, which we call 3M++. These criteria require us, first, to identify an abstract level of description that is still detailed enough to be “runnable”, and then to construct model-to-brain mappings using the same principles as those employed for brain-to-brain mapping across individuals. Perhaps surprisingly, the abstractions required are just those already in use in experimental neuroscience and deployed in the construction of more familiar computational models – just as the principles of inter-brain mappings are very much in the spirit of those already employed in the collection and analysis of data across animals. In a companion paper, we address the relationship between optimization and intelligibility in the context of functional evolutionary explanations. Taken together, mechanistic interpretations of computational models and the dependencies between form and function illuminated by optimization processes can help us understand why brain systems are built the way they are.
