A hallmark of the brains of intelligent animals is the ability to learn to respond to an ensemble of active neuronal inputs with a behaviorally appropriate ensemble of active neuronal outputs. Previously, a hypothesis was proposed for how this ability is implemented at the cellular level within the neocortical pyramidal neuron: apical tuft or perisomatic inputs initiate "guess" neuron firings, while the basal dendrites identify input patterns through excited synaptic clusters, whose excitation strength is adjusted on the basis of reward feedback. This simple mechanism allows neurons to learn to classify their inputs in a surprisingly intelligent manner. Here, we revise and extend this hypothesis. We modify the synaptic plasticity rules to align with behavioral timescale synaptic plasticity (BTSP) observed in hippocampal area CA1, making the framework more biophysically and behaviorally plausible. The neurons for the guess firings are selected voluntarily via feedback connections to apical tufts in neocortical layer 1, leading to dendritic Ca2+ spikes with burst firing, which are postulated to be neural correlates of attentional, aware processing. Once learned, the neuronal input classification is executed without voluntary or conscious control, enabling hierarchical, incremental learning of classifications that is effective in our inherently classifiable world. In addition to voluntary burst firing, we propose that pyramidal neuron bursts can also be initiated involuntarily via apical tuft inputs, drawing attention toward important cues such as novelty and noxious stimuli. We classify the excitations of neocortical pyramidal neurons into four categories according to their excitation pathway: attentional versus automatic, and voluntary/acquired versus involuntary. Additionally, we hypothesize that dendrites within pyramidal neuron minicolumn bundles are coupled via depolarization cross-induction, enabling minicolumn functions such as the creation of powerful hierarchical "hyperneurons" and the internal representation of the external world. We suggest building blocks for extending this microcircuit theory to network-level processing, which, interestingly, yields variants resembling the artificial neural networks currently in use. On a more speculative note, we conjecture that the principles of intelligence in universes governed by certain types of physical laws might resemble ours.
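To make the core learning loop concrete, the following minimal Python sketch caricatures the mechanism described above: a neuron whose basal synaptic "clusters" recognize input patterns, which fires either automatically once a cluster is strong enough or as an apically driven "guess", and which adjusts cluster strengths from reward feedback. This is an illustrative aid, not code from the paper; the names (ClusterNeuron, LEARNING_RATE, FIRE_THRESHOLD), the binary input patterns, and the threshold rule are assumptions made for this example only.

# Illustrative sketch (assumptions only, not the paper's model):
# one "neuron" with basal synaptic clusters whose strengths are
# adjusted by reward feedback after "guess" or automatic firings.

LEARNING_RATE = 0.2   # assumed reward-update step size
FIRE_THRESHOLD = 0.5  # assumed strength needed for automatic firing

class ClusterNeuron:
    def __init__(self, clusters):
        # clusters: list of frozensets of input indices each basal cluster recognizes
        self.clusters = clusters
        self.strengths = [0.0] * len(clusters)  # reward-adjusted excitation strengths

    def best_cluster(self, active_inputs):
        # A cluster is "excited" when all of its synapses see active inputs.
        excited = [i for i, c in enumerate(self.clusters) if c <= active_inputs]
        if not excited:
            return None
        return max(excited, key=lambda i: self.strengths[i])

    def respond(self, active_inputs, apical_guess=False):
        # Fire automatically if a learned cluster is strong enough,
        # or fire as a voluntary "guess" when apical (top-down) input says to try.
        i = self.best_cluster(active_inputs)
        if i is not None and self.strengths[i] >= FIRE_THRESHOLD:
            return True, i   # learned, automatic firing
        if apical_guess and i is not None:
            return True, i   # "guess" firing
        return False, i

    def feedback(self, cluster_index, rewarded):
        # Reward feedback strengthens or weakens the excited cluster.
        if cluster_index is None:
            return
        delta = LEARNING_RATE if rewarded else -LEARNING_RATE
        self.strengths[cluster_index] = min(1.0, max(0.0, self.strengths[cluster_index] + delta))

# Toy usage: the neuron should learn to fire for pattern {0, 1, 2} but not {3, 4, 5}.
neuron = ClusterNeuron([frozenset({0, 1, 2}), frozenset({3, 4, 5})])
for _ in range(5):
    for pattern, should_fire in [({0, 1, 2}, True), ({3, 4, 5}, False)]:
        fired, cluster = neuron.respond(frozenset(pattern), apical_guess=True)
        if fired:
            neuron.feedback(cluster, rewarded=should_fire)
print(neuron.strengths)  # first cluster strengthened toward 1.0, second stays near 0.0

After a few rewarded trials the first cluster crosses the firing threshold and the classification runs "automatically", without the apical guess, mirroring the transition from voluntary to acquired firing described in the abstract.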