Abstract
Over the last decade, deep neural networks (DNNs) have transformed the state of the art in artificial intelligence. In domains such as language production and reasoning, long considered uniquely human abilities, contemporary models have proven capable of strikingly human-like performance. However, in contrast to classical symbolic models, neural networks can be inscrutable even to their designers, making it unclear what significance, if any, they have for theories of human cognition. Two extreme reactions are common. Neural network enthusiasts argue that, because the inner workings of DNNs do not seem to resemble any of the traditional constructs of psychological or linguistic theory, their success renders these theories obsolete and motivates a radical paradigm shift. Neural network skeptics instead take this inability to interpret DNNs in psychological terms to mean that their success is irrelevant to psychological science. In this article, we review recent work that suggests that the internal mechanisms of DNNs can, in fact, be interpreted in the functional terms characteristic of psychological explanations. We argue that this undermines the shared assumption of both extremes and opens the door for DNNs to inform theories of cognition and its development.