Abstract

Computational neuroscience relies on simulations of neural network models to bridge the gap between the theory of neural networks and the experimentally observed activity dynamics in the brain. The rigorous validation of simulation results against reference data is thus an indispensable part of any simulation workflow. Moreover, the availability of different simulation environments and levels of model description also requires the validation of model implementations against each other to evaluate their equivalence. Despite rapid advances in the formalized description of models, data, and analysis workflows, there is no accepted consensus regarding the terminology and practical implementation of validation workflows in the context of neural simulations. This situation prevents the generic, unbiased comparison between published models, which is a key element of enhancing reproducibility of computational research in neuroscience. In this study, we argue for the establishment of standardized statistical test metrics that enable the quantitative validation of network models on the level of the population dynamics. Despite the importance of validating the elementary components of a simulation, such as single-cell dynamics, building networks from validated building blocks does not entail the validity of the simulation on the network scale. Therefore, we introduce a corresponding set of validation tests and present an example workflow that practically demonstrates the iterative model validation of a spiking neural network model against its reproduction on the SpiNNaker neuromorphic hardware system. We formally implement the workflow using a generic Python library that we introduce for validation tests on neural network activity data. Together with the companion study (Trensch et al., 2018), the work presents a consistent definition, formalization, and implementation of the verification and validation process for neural network simulations.
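
To make the idea concrete, the following is a minimal sketch of one such network-level validation test: comparing the distributions of time-averaged single-neuron firing rates produced by two implementations of the same model with a two-sample Kolmogorov-Smirnov test. It uses plain NumPy/SciPy rather than the validation library introduced in the paper, and the function names and inputs are illustrative assumptions.

```python
# Minimal sketch of a network-level validation test (illustrative only,
# not the API of the paper's validation library): compare the firing-rate
# distributions of two implementations of the same network model.
import numpy as np
from scipy import stats

def firing_rates(spiketrains, t_stop):
    """Time-averaged firing rate (Hz) per neuron; spike times in seconds."""
    return np.array([len(st) / t_stop for st in spiketrains])

def validate_firing_rates(spiketrains_a, spiketrains_b, t_stop, alpha=0.05):
    """Two-sample Kolmogorov-Smirnov test on the rate distributions.

    spiketrains_a/b: lists of spike-time arrays, one per neuron, recorded
    from the two implementations under identical conditions.
    """
    rates_a = firing_rates(spiketrains_a, t_stop)
    rates_b = firing_rates(spiketrains_b, t_stop)
    statistic, p_value = stats.ks_2samp(rates_a, rates_b)
    # If p >= alpha, we fail to reject "same distribution": on this
    # particular measure of the population dynamics the models agree.
    return statistic, p_value, p_value >= alpha
```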

Highlights

  • We introduce the concept of network-level validation in computational neuroscience: the validation of a simulation on the basis of measures derived from the collective dynamics exhibited by the model (a sketch of one such population-level measure follows this list).

  • We present the results of the various validation tests of the SpiNNaker implementation against the C simulation of the polychronization model.
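
As a hedged illustration of a measure derived from the collective dynamics (the actual battery of tests applied to the SpiNNaker implementation is described in the paper), the sketch below computes pairwise Pearson correlation coefficients of binned spike trains; their distribution can be compared across implementations in the same way as the rate distributions above. Bin size and function names are assumptions for illustration.

```python
# Illustrative sketch (not the paper's code) of a population-level measure:
# pairwise correlation coefficients of binned spike trains.
import numpy as np

def binned_counts(spiketrains, t_stop, bin_size=0.005):
    """Bin spike times (seconds) into a (neurons x bins) count matrix."""
    edges = np.arange(0.0, t_stop + bin_size, bin_size)
    return np.array([np.histogram(st, bins=edges)[0] for st in spiketrains])

def pairwise_correlations(spiketrains, t_stop, bin_size=0.005):
    """Upper-triangular Pearson correlations between all neuron pairs.

    Note: neurons that never spike produce NaN rows in np.corrcoef and
    may need to be filtered out beforehand.
    """
    counts = binned_counts(spiketrains, t_stop, bin_size)
    cc = np.corrcoef(counts)
    iu = np.triu_indices_from(cc, k=1)  # each pair counted once
    return cc[iu]
```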

Introduction

Computational neuroscience is driven by the development of models describing neuronal activity on different temporal and spatial scales, ranging from single cells (e.g., Koch and Segev, 2000; Izhikevich, 2004) to spiking activity in mesoscopic neural networks (e.g., Potjans and Diesmann, 2014; Markram et al., 2015), to whole-brain activity (e.g., Sanz Leon et al., 2013; Schmidt et al., 2018). While there is no general consensus on how models should be described and delivered (Nordlie et al., 2009), a number of frameworks support researchers in documenting and implementing models beyond the level of custom-written code in standard high-level programming languages. These frameworks include guidelines for reproducible network model representations (Nordlie et al., 2009; McDougal et al., 2016), domain-specific model description languages (e.g., Gleeson et al., 2010; Plotnikov et al., 2016), modeling tool-kits (e.g., BMTK, NetPyNE), and generic network simulation frameworks (Davison et al., 2008). To share models and data with the community, several databases and repositories have emerged and are commonly used for this purpose, for example GitHub, OpenSourceBrain, the Neocortical Microcircuit Collaboration Portal (Ramaswamy et al., 2015), the G-Node Infrastructure (GIN), ModelDB, NeuroElectro (Tripathy et al., 2014), or CRCNS (Teeters et al., 2008).
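
As one concrete example of the simulator-independent descriptions mentioned above, the sketch below uses PyNN (Davison et al., 2008): exchanging the backend import (e.g., pyNN.nest for pyNN.spiNNaker) reruns the same model description on a different simulator, which is exactly the situation that calls for validating implementations against each other. The toy network itself (population sizes, parameters) is a made-up illustration, not one of the models cited above.

```python
# Minimal simulator-independent model description with PyNN.
# Swapping the backend import changes the simulator, not the model.
import pyNN.nest as sim  # pyNN.spiNNaker would target SpiNNaker instead

sim.setup(timestep=0.1)  # ms

# A small population of leaky integrate-and-fire neurons driven by
# independent Poisson spike sources.
pop = sim.Population(100, sim.IF_curr_exp(tau_m=20.0, v_thresh=-50.0))
noise = sim.Population(100, sim.SpikeSourcePoisson(rate=10.0))  # Hz

sim.Projection(noise, pop, sim.OneToOneConnector(),
               synapse_type=sim.StaticSynapse(weight=0.5, delay=1.0))

pop.record('spikes')
sim.run(1000.0)  # ms

# Neo SpikeTrain objects, i.e., the raw material for the statistical
# validation tests sketched earlier.
spiketrains = pop.get_data().segments[0].spiketrains
sim.end()
```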
