Abstract

A significant benefit of computational models is that they mandate an explicit and quantitative account of all variables involved in the scientific problem being investigated. When a simulation is launched, the software program reads a series of parameter values that are assigned by the investigator. In compartmental models of neuronal electrophysiology, these parameters might correspond to membrane capacitances, axial resistances, synaptic conductances, reversal potentials, etc., for various parts of the neuron. In a model of neural network dynamics, parameters might include the interconnectivity, activation threshold, and input/output relations of the nodes or units. Other models will require biophysical parameters (and corresponding values) appropriate for the scale, domain, and scope of investigation. A developmental model of dendritic growth might use filopodia elongation and retraction rates, whereas a second-messenger signaling model could instead use kinetic constants for each enzymatic step of the metabotropic cascade. How are parameter values typically selected in computational neuroscience models? In experimentally driven approaches, a key component of the research design consists of an extensive literature search to determine suitable values or appropriate ranges for each parameter of the model. In a minority of cases, a direct measurement is available for a given parameter. More often, other related biophysical observables are reported, which can be employed to indirectly constrain the parameter in question. When even indirect measures are not available, the investigator may opt to leave the “orphan” parameter open. In this scenario, a broad range of unconstrained values is tried in a number of otherwise identical simulations, and the results are monitored to reproduce or match some desired “emergent” property. Open parameter searches are usually considered a last resort because they increase the degrees of freedom in the model, limiting its predictive power and usefulness. Such searches are often time-consuming, computationally intensive, and inconclusive. Inferring parameter values from indirect measures encompasses a variety of scenarios, depending on how closely related the available empirical data are to the target parameter. This exercise, in fact, constitutes a continuum between the availability of direct measurements and the need for open searches to fit an observable result. In most circumstances, however, parameter value selection represents a critically delicate and labor-intensive process in computational neuroscience. It may therefore be useful to develop a database of quantitative neuroscience measures used as parameter values in computational models. Such an archive would provide an invaluable resource for reuse, while at the same time documenting the details of parameter value selection. Each entry in the database would consist of a parameter definition and value(s) with associated metadata, including how the value was derived (direct measurement, indirect inference, or fitting), the literature reference(s) reporting the underlying measurements, and the specifics of the model that incorporates the parameter. A database of model parameter values would exemplify a specific application of the notion of data publishing while complementing existing efforts linking model sharing with literature mining. Many difficulties would need to be
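To illustrate the “open” search scenario described above, the sketch below tries a broad range of values for an unconstrained parameter in otherwise identical model runs and keeps the value whose emergent output best matches a target. This is a minimal illustration, not any specific published workflow: the `run_simulation` function, the candidate range, and the target value are hypothetical placeholders.

```python
import numpy as np

def run_simulation(candidate_value: float) -> float:
    """Placeholder for a full model run; returns the emergent observable
    (e.g., a mean firing rate) produced with this parameter value."""
    return 10.0 * np.tanh(candidate_value / 5.0)

target = 8.0                                # desired emergent property (e.g., Hz)
candidates = np.linspace(0.1, 50.0, 200)    # broad, unconstrained range of values
errors = [abs(run_simulation(v) - target) for v in candidates]
best = candidates[int(np.argmin(errors))]   # value whose output best matches the target
print(f"best-fitting value: {best:.2f} (error {min(errors):.3f})")
```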
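As a rough sketch of what a single entry in the proposed parameter database might contain, the following Python dataclass bundles a parameter definition and value with the metadata mentioned above: how the value was derived, the supporting literature references, and the model that uses the parameter. All field names and the example values are illustrative assumptions, not an existing schema.

```python
from dataclasses import dataclass, field
from enum import Enum
from typing import List, Optional, Tuple

class Derivation(Enum):
    DIRECT = "direct measurement"
    INDIRECT = "inferred from related observables"
    FITTED = "open search / fit to an emergent property"

@dataclass
class ParameterEntry:
    name: str                                    # e.g., "axial resistance"
    definition: str                              # what the parameter represents in the model
    value: float                                 # selected value
    value_range: Optional[Tuple[float, float]]   # plausible range, if one is reported
    units: str                                   # e.g., "ohm*cm"
    derivation: Derivation                       # how the value was obtained
    references: List[str] = field(default_factory=list)  # literature reporting the measurements
    model: str = ""                              # model that incorporates the parameter
    notes: str = ""                              # assumptions made during value selection

# Example entry with placeholder values:
entry = ParameterEntry(
    name="axial resistance",
    definition="Cytoplasmic resistivity used in a compartmental neuron model",
    value=150.0,
    value_range=(70.0, 300.0),
    units="ohm*cm",
    derivation=Derivation.INDIRECT,
    references=["Author et al. (year)"],
    model="compartmental model of a pyramidal cell",
)
```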
