Abstract

Deep neural networks (DNNs) have attained human-level performance on dozens of challenging tasks via an end-to-end deep learning strategy. Deep learning learns data representations at multiple levels of abstraction; however, it does not explicitly provide any insight into the internal operations of DNNs. Deep learning's success appeals to neuroscientists not only as a method for applying DNNs to model biological neural systems but also as a means of adopting concepts and methods from cognitive neuroscience to understand the internal representations of DNNs. Although general deep learning frameworks, such as PyTorch and TensorFlow, can be used for such cross-disciplinary investigations, doing so typically requires high-level programming expertise and comprehensive mathematical knowledge. A toolbox specifically designed for cognitive neuroscientists to map both DNNs and brains is urgently needed. Here, we present DNNBrain, a Python-based toolbox designed for exploring the internal representations of DNNs as well as brains. By integrating DNN software packages with well-established brain imaging tools, DNNBrain provides application programming and command line interfaces for a variety of research scenarios, including extracting DNN activation, probing and visualizing DNN representations, and mapping DNN representations onto the brain. We expect that our toolbox will accelerate scientific research both by applying DNNs to model biological neural systems and by utilizing paradigms of cognitive neuroscience to unveil the black box of DNNs.
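
To give a concrete sense of the "extracting DNN activation" scenario mentioned above, the sketch below shows how such an extraction can be done directly in PyTorch with a forward hook on a pretrained AlexNet. This is not DNNBrain's own interface; the model choice, the layer index, and the stimulus file name ("stimulus.jpg") are illustrative assumptions, and the code only demonstrates the low-level operation that a toolbox of this kind is designed to wrap in a more convenient API.

    # Minimal PyTorch sketch of extracting intermediate-layer DNN activation.
    # NOT DNNBrain's API; model, layer index, and image path are placeholders.
    import torch
    from torchvision import models, transforms
    from PIL import Image

    model = models.alexnet(pretrained=True).eval()

    activations = {}
    def save_activation(name):
        def hook(module, inputs, output):
            activations[name] = output.detach()
        return hook

    # Register a forward hook on one convolutional layer (conv5 of AlexNet).
    model.features[10].register_forward_hook(save_activation("conv5"))

    # Standard ImageNet preprocessing for the stimulus image.
    preprocess = transforms.Compose([
        transforms.Resize(256),
        transforms.CenterCrop(224),
        transforms.ToTensor(),
        transforms.Normalize(mean=[0.485, 0.456, 0.406],
                             std=[0.229, 0.224, 0.225]),
    ])

    img = preprocess(Image.open("stimulus.jpg").convert("RGB")).unsqueeze(0)
    with torch.no_grad():
        model(img)

    print(activations["conv5"].shape)  # e.g. torch.Size([1, 256, 13, 13])

The resulting activation tensor (channels x height x width per stimulus) is the kind of representation that can then be probed, visualized, or mapped onto brain responses, as described above.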

Highlights

  • Over the past decade, artificial intelligence (AI) has made dramatic advances because of the rise of deep learning (DL) techniques

  • DNNBrain integrates well-established deep neural network (DNN) and brain imaging software packages to enable researchers to conveniently map the representations of DNNs and brains, and examine their correspondences

  • DNNBrain, as a toolbox tailored toward mapping the representations of DNNs and brains, has good potential to accelerate the convergence of these two trends


Introduction

Artificial intelligence (AI) has made dramatic advances because of the rise of deep learning (DL) techniques. DL is able to automatically discover the multiple levels of representation needed for a given task (LeCun et al., 2015; Goodfellow et al., 2016). With this built-in architecture and learning from large external datasets, deep convolutional neural networks (DCNNs) have achieved human-level performance on a variety of challenging object recognition (Krizhevsky et al., 2012; Simonyan and Zisserman, 2015; Szegedy et al., 2015; He et al., 2016) and speech recognition tasks (Hinton et al., 2012; Sainath et al., 2013; Hannun et al., 2014).

