Abstract

Much of the controversy evoked by the use of deep neural networks as models of biological neural systems amounts to debates over what constitutes scientific progress in neuroscience. To discuss what constitutes scientific progress, one must have a goal in mind (progress toward what?). One such long-term goal is to produce scientific explanations of intelligent capacities (e.g., object recognition, relational reasoning). I argue that the most pressing philosophical questions at the intersection of neuroscience and artificial intelligence are ultimately concerned with defining the phenomena to be explained and with what constitutes a valid explanation of such phenomena. I propose that a foundation in the philosophy of scientific explanation and understanding can scaffold future discussions about how an integrated science of intelligence might progress. Toward this vision, I review relevant theories of scientific explanation and discuss strategies for unifying the scientific goals of neuroscience and AI.

Highlights

  • I completed my graduate studies in cognitive computational neuroscience¹ during the deep learning revolution in machine learning and what is being called the artificial … Empirical work inspired by this potential characterized a correspondence between convolutional neural networks trained to recognize objects in images and brain regions along the ventral stream of the primate visual system [4,5,6,7,8,9]

  • As deep learning approaches continued to demonstrate their ability to perform a variety of computer vision, machine hearing, and language processing tasks, several neuroscientists came to the same conclusion: that these machine learning systems had potential to … sensory systems [20,21,22,23,24]. These works are part of a much broader alignment developing between the fields of neuroscience and artificial intelligence (AI), which includes work on cognitively inspired AI, biologically plausible learning algorithms, neural dynamics, representational geometry, and decision making [25,26,27,28,29,30,31,32,33]

  • Many neuroscientists may be compelled by these fundamental questions about how to explain neural and cognitive phenomena and be unaware that there exists a whole subfield of philosophy of science dedicated to the philosophy of neuroscience, in which detailed models have been proposed to account for scientific explanation in neuroscience


Introduction

I completed my graduate studies in cognitive computational neuroscience¹ during the deep learning revolution in machine learning and what is being called the artificial … Empirical work inspired by this potential characterized a correspondence between convolutional neural networks trained to recognize objects in images and brain regions along the ventral stream of the primate visual system [4,5,6,7,8,9]. Many neuroscientists may be compelled by these fundamental questions about how to explain neural and cognitive phenomena and be unaware that there exists a whole subfield of philosophy of science dedicated to the philosophy of neuroscience, in which detailed models have been proposed to account for scientific explanation in neuroscience.

