Abstract

Here, we combine network neuroscience and machine learning to reveal connections between the brain’s network structure and the emerging network structure of an artificial neural network. Specifically, we train a shallow, feedforward neural network to classify hand-written digits and then use a combination of systems neuroscience and information-theoretic tools to perform ‘virtual brain analytics’ on the resultant edge weights and activity patterns of each node. We identify three distinct phases of network reconfiguration across learning, each of which is characterized by unique topological and information-theoretic signatures. Each phase involves aligning the connections of the neural network with patterns of information contained in the input dataset or preceding layers (as relevant). We also observe a process of low-dimensional category separation in the network as a function of learning. Our results offer a systems-level perspective on how artificial neural networks function, in terms of a multi-stage reorganization of edge weights and activity patterns that effectively exploits the information content of the input data during edge-weight training, while simultaneously enriching our understanding of the methods used in systems neuroscience.
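
A minimal sketch of the kind of setup the abstract describes is given below. This is not the authors' code: PyTorch and the 8x8 scikit-learn digits dataset are stand-ins for the original training pipeline and for MNIST, the single hidden layer, optimizer, and epoch count are assumptions, and the only point illustrated is how per-layer edge-weight matrices can be snapshotted across training for later network-level analysis.

```python
# Minimal sketch (not the authors' code): train a shallow feedforward network
# on hand-written digits and snapshot its edge weights after each epoch,
# so the weight matrices can later be analysed as a network.
# Assumes PyTorch and scikit-learn; the 8x8 'digits' dataset stands in for MNIST.
import torch
import torch.nn as nn
from sklearn.datasets import load_digits

digits = load_digits()
X = torch.tensor(digits.data, dtype=torch.float32) / 16.0   # 1797 x 64 inputs
y = torch.tensor(digits.target, dtype=torch.long)           # labels 0-9

# One hidden layer: 64 inputs -> 100 hidden units -> 10 output classes (assumed sizes).
model = nn.Sequential(nn.Linear(64, 100), nn.ReLU(), nn.Linear(100, 10))
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
loss_fn = nn.CrossEntropyLoss()

weight_snapshots = []                       # per-epoch copies of each layer's weights
for epoch in range(20):
    optimizer.zero_grad()
    loss = loss_fn(model(X), y)
    loss.backward()
    optimizer.step()
    weight_snapshots.append([p.detach().clone().numpy()
                             for name, p in model.named_parameters()
                             if name.endswith("weight")])

# Each snapshot holds the input->hidden and hidden->output weight matrices,
# i.e. the signed 'edge weights' that network-level analyses would operate on.
```

Each entry of weight_snapshots can then be treated as a weighted bipartite graph between consecutive layers, which is where topological and information-theoretic measures would be computed.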

Highlights

  • In the human brain, capacities such as cognition, attention, and awareness emerge from the coordinated activity of billions of neurons [1]

  • A clear difference between the topology of the artificial neural network (ANN) and standard approaches to analyzing neuroimaging data is that the mean of the absolute value of the edge weights in all three groups increased nonlinearly over the course of training in the ANN, whereas typical neuroimaging analyses normalize edge-weight strength across cohorts (a minimal sketch of this summary statistic is given after this list)

  • In this work, we used information-theoretic and network science tools to study the topological features of a neural network during training that underlie its performance on supervised learning problems
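
The following is a minimal, hypothetical sketch of the summary statistic referenced in the second highlight: the mean absolute edge weight of each layer, tracked across training epochs. It reuses the weight_snapshots list from the earlier snippet; the layer grouping and epoch count are illustrative assumptions, not the paper's exact analysis.

```python
# Hypothetical continuation of the earlier sketch: summarise each per-epoch
# weight snapshot by the mean absolute edge weight of every layer. In the
# highlight above, this quantity increases nonlinearly over training and,
# unlike typical neuroimaging pipelines, is not normalized across 'cohorts'.
import numpy as np

mean_abs_per_epoch = np.array([
    [np.abs(W).mean() for W in snapshot]   # one value per weight matrix (layer)
    for snapshot in weight_snapshots
])                                          # shape: (n_epochs, n_layers)

for epoch, (w_in_hid, w_hid_out) in enumerate(mean_abs_per_epoch):
    print(f"epoch {epoch:2d}  mean|W| in->hid = {w_in_hid:.4f}  "
          f"hid->out = {w_hid_out:.4f}")
```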


Introduction

Capacities such as cognition, attention, and awareness emerge from the coordinated activity of billions of neurons [1]. Even with access to high-resolution images of neural connectivity, we do not yet have generative models that can effectively simulate different patterns of network reconfiguration across contexts. Without these ‘ground truth’ approaches, systems neuroscience is currently stuck at the descriptive level: we can identify consistent changes in network-level reconfiguration as a function of learning [5, 10], or of more abstract cognitive capacities, such as working memory manipulation [16] or dual-task performance [17], but we have no principled means of translating these observations into interpretable mechanistic hypotheses [18].
