Abstract
Our brains operate as a complex network of interconnected neurons. To gain a deeper understanding of this network architecture, it is essential to extract simple rules from its intricate structure. This study aimed to compress and simplify the architecture, with a particular focus on interpreting patterns of functional connectivity in 2.5 hr of electrical activity recorded from a vast number of neurons in acute mouse brain slices. Here, we combined two distinct methods: automatic compression and network analysis. First, for automatic compression, we trained an artificial neural network named NNE (neural network embedding). This allowed us to reduce the connectivity to features represented by only 13% of the original neuron count. Second, to decipher the topology, we focused on the variability among the compressed features and compared them with 15 distinct network metrics. Specifically, we introduced two new metrics, termed indirect-adjacent degree and neighboring hub ratio. Our results demonstrated that these new metrics could better explain approximately 40%–45% of the features. This finding highlights the critical role of NNE in facilitating the development of innovative metrics, because some of the features extracted by NNE were not captured by currently existing network metrics.
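The abstract describes the compression step only at a high level: a neural network (NNE) reduces each neuron's functional connectivity to a feature set about 13% the size of the original neuron count. The sketch below is not the authors' NNE implementation; it is a minimal, hedged illustration assuming an autoencoder-style embedding of an N x N connectivity matrix, with a bottleneck of round(0.13 * N) features. All names (`ConnectivityAutoencoder`, `train_embedding`) and architectural choices are hypothetical.

```python
# Minimal sketch of an autoencoder-style connectivity embedding (assumption, not the authors' NNE).
import torch
import torch.nn as nn


class ConnectivityAutoencoder(nn.Module):
    """Compress per-neuron connectivity vectors into a low-dimensional feature space."""

    def __init__(self, n_neurons: int, compression: float = 0.13):
        super().__init__()
        n_features = max(1, round(compression * n_neurons))  # ~13% of the neuron count
        self.encoder = nn.Sequential(nn.Linear(n_neurons, n_features), nn.Tanh())
        self.decoder = nn.Linear(n_features, n_neurons)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.decoder(self.encoder(x))


def train_embedding(connectivity: torch.Tensor, epochs: int = 200) -> torch.Tensor:
    """Fit the autoencoder to an (N x N) connectivity matrix; return (N x k) compressed features."""
    model = ConnectivityAutoencoder(connectivity.shape[0])
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        optimizer.zero_grad()
        loss = loss_fn(model(connectivity), connectivity)  # reconstruction objective
        loss.backward()
        optimizer.step()
    with torch.no_grad():
        return model.encoder(connectivity)  # one compressed feature vector per neuron
```

In such a setup, the per-neuron feature vectors returned by the encoder would be the quantities compared against the 15 network metrics, including the proposed indirect-adjacent degree and neighboring hub ratio, whose exact definitions are given in the full paper rather than the abstract.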