Abstract

A fundamental question in neuroscience is how structure and function of neural systems are related. We study this interplay by combining a familiar auto-associative neural network with an evolving mechanism for the birth and death of synapses. A feedback loop then arises, leading to two qualitatively different types of behaviour. In one, the network structure becomes heterogeneous and disassortative, and the system displays good memory performance; furthermore, the structure is optimised for the particular memory patterns stored during the process. In the other, the structure remains homogeneous and the system is incapable of pattern retrieval. These findings provide an inspiring picture of brain structure and dynamics that is compatible with experimental results on early brain development, and may help to explain synaptic pruning. Other evolving networks, such as those of protein interactions, might share the basic ingredients of this feedback loop, and indeed many of their structural features are as predicted by our model.

Highlights

  • A fundamental question in neuroscience is how structure and function of neural systems are related

  • Experiments have confirmed that real neural networks exhibit degree heterogeneity roughly consistent with scale-free distributions, as well as negative degree–degree correlations, both of which strongly influence the dynamics of the system[9,10]

  • Network structure dynamics are defined by the probability of each node i to gain or to lose an edge: P_i^g = u(κ) π(I_i),  P_i^l = d(κ) η(I_i)  (1)
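The sketch below shows how Eq. (1) could drive edge birth and death in a simulation. The specific functional forms of u, d, π and η, the target mean degree kappa0, and the use of a fixed binary activity pattern as a stand-in for the incoming current I_i are illustrative assumptions, not the paper's calibrated choices.

```python
import numpy as np

rng = np.random.default_rng(0)

N = 200                                      # number of nodes (illustrative)
A = np.triu(rng.random((N, N)) < 0.05, 1)    # sparse random start
A = (A | A.T).astype(int)                    # symmetric adjacency, no self-loops


def u(kappa, kappa0=10.0):
    """Global gain rate u(kappa): large when the mean degree is low."""
    return 1.0 / (1.0 + np.exp(kappa - kappa0))


def d(kappa, kappa0=10.0):
    """Global loss rate d(kappa): large when the mean degree is high."""
    return 1.0 / (1.0 + np.exp(kappa0 - kappa))


def pi(I):
    """Local gain preference pi(I_i): grows with the node's incoming current."""
    return I / (I.max() + 1e-12)


def eta(I):
    """Local loss preference eta(I_i): shrinks with the node's incoming current."""
    return 1.0 - pi(I)


def evolve_step(A, s):
    """One structural update: every node may gain and/or lose one edge."""
    N = A.shape[0]
    I = A @ s                                # stand-in for the incoming current I_i
    kappa = A.sum() / N                      # mean degree kappa
    P_gain = u(kappa) * pi(I)                # Eq. (1): P_i^g = u(kappa) * pi(I_i)
    P_loss = d(kappa) * eta(I)               # Eq. (1): P_i^l = d(kappa) * eta(I_i)
    for i in range(N):
        if rng.random() < P_gain[i]:         # node i gains an edge to a random node
            j = rng.integers(N)
            if j != i:
                A[i, j] = A[j, i] = 1
        if rng.random() < P_loss[i]:         # node i loses one of its existing edges
            nbrs = np.flatnonzero(A[i])
            if nbrs.size:
                j = rng.choice(nbrs)
                A[i, j] = A[j, i] = 0
    return A


# Example: iterate the structural dynamics with a fixed binary activity pattern.
s = rng.integers(0, 2, N)
for _ in range(1000):
    A = evolve_step(A, s)
print("final mean degree:", A.sum() / N)
```

In this reading of Eq. (1), u(κ) and d(κ) are global rates that depend only on the mean degree κ, while π(I_i) and η(I_i) are local preferences that depend on each node's incoming current, so structural change is coupled to neural activity, which is the feedback loop described in the abstract.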

Introduction

A fundamental question in neuroscience is how structure and function of neural systems are related. Even though synapses are highly dynamic, their overall statistics are preserved over time in the adult brain, indicating that synapse creation and pruning balance each other[23]. Models in which networks are gradually formed, for instance by the addition of nodes and edges or by the rewiring of the latter, have long been studied in different contexts. In the familiar Barabási–Albert model, a node's probability of receiving a new edge is proportional to its degree[26]. These rules often give rise to phase transitions (almost invariably of a continuous nature), such that different kinds of network topology can ensue depending on parameters[27], and they have been used to reproduce some connectivity data on human brain development[28]. Previous studies of co-evolving brain networks have examined the temporal evolution of the mean degree[32], particular microscopic mechanisms[20], the development of certain computational capabilities[33], or the effects of specific growth rules[34], and have suggested a role for bistability and discontinuous transitions in the brain, for instance in the synaptic plasticity mechanisms involved in learning[35,36].
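For readers unfamiliar with the Barabási–Albert rule mentioned above, the following sketch illustrates preferential attachment: each new node connects to existing nodes chosen with probability proportional to their current degree. It is a generic illustration of that growth rule, not the co-evolving model studied here; the seed size, the total number of nodes and the choice of m = 3 edges per new node are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(1)


def barabasi_albert(n_nodes, m):
    """Grow a network by preferential attachment: each new node adds m edges,
    choosing targets with probability proportional to their current degree."""
    # Start from a small fully connected seed of m + 1 nodes.
    edges = [(i, j) for i in range(m + 1) for j in range(i)]
    degree = np.zeros(n_nodes)
    for i, j in edges:
        degree[i] += 1
        degree[j] += 1
    for new in range(m + 1, n_nodes):
        p = degree[:new] / degree[:new].sum()          # attachment probabilities
        targets = rng.choice(new, size=m, replace=False, p=p)
        for t in targets:
            edges.append((new, t))
            degree[new] += 1
            degree[t] += 1
    return edges, degree


edges, degree = barabasi_albert(n_nodes=1000, m=3)
print("max degree:", int(degree.max()), "mean degree:", degree.mean())
```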
