Computational neuroscience is still a nascent field, aspiring to imitate the great success of theoretical physics. The crucial advance of theoretical physics, which transformed our understanding of the material world beyond recognition, was the demonstration that various physical processes, no matter how complicated and rich they may appear to an observer, are governed by a well-defined set of simple principles formulated in the precise language of mathematical equations. Are there simple mathematical laws that capture the fundamental principles governing the operation of neural systems? Will we one day understand the brain to a degree similar to our present understanding of the material world? If so, we will be able to quantitatively predict complicated cognitive behaviours under arbitrary external and internal conditions and build good brain emulators, much as we can predict the trajectory of an asteroid or design a bridge with a given safety level. Mere acceptance of such a possibility should not be taken for granted, however, as it might put us in a shaky position with respect to some deeply entrenched (and hotly debated) concepts, such as free will and related issues. This simple consideration illustrates, in my view, that even the ultimate goals of computational neuroscience should continually be debated and refined. Theoretical studies of the brain are pursued along several disjoint scientific approaches. We have rather accurate mathematical models of ion channels, neurons and synapses, at a level of rigor approaching that of physics. These are formulated in the language of dynamical systems theory and have reasonable predictive capacity. Extending this approach to larger systems, giving rise to neural network theory, has produced some profound and imaginative ideas about such complex cognitive phenomena as learning and memory, object recognition and spatial navigation.
The predictive capacity of neural network models is, however, much more limited, and there are still profound uncertainties about their validity, explained among other things by still-inadequate experimental techniques for measuring the activity of large neuronal ensembles and the anatomy of neuronal interactions. Moreover, neural network models have so far been applied to a rather limited repertoire of cognitive behaviours. Another approach is to view the brain as a biological implementation of a complex computer, and correspondingly to study it with the tools of computer science. In essence, a researcher formulates a computational task that the brain is performing, and then tries to imagine which algorithm he or she would use to perform that task on a computer. Neuronal architectures may then be used to place certain constraints on the possible algorithms, depending on the particular problem in question. Successful applications of this approach include reinforcement learning theory, computer vision, and the Bayesian inference framework for information processing in the brain. Despite significant achievements, this approach still appears to be largely detached from more biologically inspired modelling. Yet another direction could be called the 'data-driven approach', whose proponents are mainly concerned with developing sophisticated techniques for analyzing the multidimensional data generated by various experimental tools such as optical imaging and fMRI. What goals can computational neuroscience realistically achieve in the foreseeable future? I believe the first challenge is to see how far neural network theory can be pushed toward explaining high-level brain functions. Any cognitive process, vision for example, involves a multi-stage stream of processing across progressively specialized brain areas that share a similar overall architecture but differ in how information is represented and processed.
Neural network models have for the most part been confined to functionally uniform neuronal populations. The second challenge that computational neuroscience will have to meet in order to make progress will be to partially break with the tradition of trying to explain all the richness of the brain with networks of simple elements and complex architectures. This tradition was probably a necessary first step that allowed researchers to set the stage for the new field and to borrow the techniques and intuitions developed in physics, but it resulted in a certain neglect of the incredible complexity of the brain's neuronal hardware. It seems reasonable to predict that a full-fledged neuronal network theory will have to seriously consider the effects of neuromodulation, neuronal and synaptic adaptation on various spatial and temporal scales, the incredible diversity of neuron types, the role of glial cells, and so on. Finally, the different theoretical approaches mentioned above will have to be more closely integrated into a unified theoretical framework. Thus, theoreticians who come from different scientific traditions should not only build bridges to their experimental colleagues, but also talk much more to each other, which, paradoxically, sometimes appears to be even more challenging.