Abstract

A general discussion of various levels of models in computational neuroscience is presented, followed by a detailed case study of modeling at the sub-cellular level. The process of learning actions by reward or punishment is called 'instrumental conditioning' or 'reinforcement learning' (RL). Temporal difference learning (TDL) is a mathematical framework for RL. Houk et al. (1995) proposed a cellular signaling model for the interaction of dopamine (DA) and glutamate activities in the striatum that forms the basis for TDL. In the model, glutamatergic input generates a membrane depolarization through N-methyl-D-aspartate (NMDA), alpha-amino-3-hydroxy-5-methyl-4-isoxazolepropionic acid (AMPA), and metabotropic glutamate receptors (mGluR), and opens calcium (Ca²⁺) channels, resulting in an influx of Ca²⁺ into the dendritic spine. This raises the postsynaptic calcium concentration in the spine, leading to the autophosphorylation of calcium/calmodulin-dependent protein kinase II (CaMKII). The timely arrival of the DA input at the neck of the spine head generates a cascade of reactions that prolongs the long-term potentiation (LTP) produced by the autophosphorylation of CaMKII. Since no simulations had been performed to support this proposal, we undertook the task of computationally verifying the model. The simulations showed enhancement and prolongation of CaMKII autophosphorylation, which verifies Houk's proposal for LTP in the striatum. Our simulation results are generally in line with known experimental biological data and also suggest predictions for future experimental verification.
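The temporal difference learning framework mentioned above can be illustrated with a minimal sketch. This is the textbook TD(0) value update, in which the TD error plays the role often attributed to the phasic dopamine signal as a reward-prediction error; it is not the authors' striatal signaling model, and the states, rewards, and parameters below are illustrative assumptions.

```python
# Minimal TD(0) sketch: the TD error (delta) is the reward-prediction
# error that dopamine activity is hypothesized to report.
def td0_update(V, s, s_next, r, alpha=0.1, gamma=0.9):
    """Update the value estimate for state s and return (V, TD error)."""
    delta = r + gamma * V[s_next] - V[s]  # reward-prediction error
    V[s] += alpha * delta                 # learning-rate-scaled correction
    return V, delta

# Illustrative two-state chain: moving from state 0 to state 1 yields reward 1.0.
V = {0: 0.0, 1: 0.0}
V, delta = td0_update(V, s=0, s_next=1, r=1.0)
# delta = 1.0 (unpredicted reward), so V[0] rises to 0.1
```

Repeated updates propagate value backward along the chain, so an initially unpredicted reward eventually produces no TD error, mirroring the shift of dopamine responses from reward delivery to reward-predicting cues.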
