The neural representation of a stimulus is repeatedly transformed as it moves from the sensory periphery to deeper layers of the nervous system. Sparsening transformations are thought to increase the separation between similar representations, encode stimuli with great specificity, maximize the storage capacity of associative memories, and provide an energy-efficient instantiation of information in neural circuits. In the insect olfactory system, odors are initially represented in the periphery as a combinatorial code with relatively simple temporal dynamics. Subsequently, in the antennal lobe, this representation is transformed into a dense and complex spatiotemporal activity pattern. Next, in the mushroom body Kenyon cells (KCs), the representation is dramatically sparsened. Finally, in mushroom body output neurons (MBONs), the representation takes on a new dense spatiotemporal format. Here, we develop a computational model to simulate this chain of olfactory processing from the receptor neurons to MBONs. We demonstrate that representations of similar odorants are maximally separated, as measured by the distance between the corresponding MBON activity vectors, when KC responses are sparse. Sparseness is maintained across variations in odor concentration by adjusting the feedback inhibition that KCs receive from an inhibitory neuron, the giant GABAergic neuron. Different odor concentrations require different strengths and timings of feedback inhibition for optimal processing. Importantly, as observed in vivo, the KC–MBON synapse is highly plastic; changes in synaptic strength after learning can therefore shift the balance of excitation and inhibition, potentially changing the distance between the MBON activity vectors of two odorants at the same level of KC population sparseness. Thus, a degree of sparseness that is optimal before odor learning could be rendered suboptimal after learning. We show, however, that the synaptic weight changes produced by spike-timing-dependent plasticity increase the distance between odor representations from the perspective of the MBONs: a level of sparseness that was optimal before learning remains optimal after learning.
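To make the separation measure concrete, the sketch below is a minimal threshold-linear rate model, not the paper's spiking model: random PN→KC connectivity, a single global inhibition term standing in for feedback from the giant GABAergic neuron, and a normalized Euclidean distance between MBON response vectors. All population sizes, connectivity, weights, and parameter values are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical population sizes, chosen only for illustration.
n_pn, n_kc, n_mbon = 100, 2000, 10

# Random, sparse PN->KC connectivity and dense KC->MBON weights
# (illustrative stand-ins for the model's connectivity).
w_pn_kc = (rng.random((n_kc, n_pn)) < 0.1) * rng.random((n_kc, n_pn))
w_kc_mbon = 0.05 * rng.random((n_mbon, n_kc))

def kc_response(pn_rates, inhibition):
    """Threshold-linear KC activation; one global inhibition term stands in
    for GGN feedback and sets the KC population sparseness."""
    return np.maximum(w_pn_kc @ pn_rates - inhibition, 0.0)

def normalized_distance(x, y, eps=1e-12):
    """Euclidean distance between two MBON vectors, normalized by their
    magnitudes so separation is comparable across sparseness levels."""
    return np.linalg.norm(x - y) / max(np.linalg.norm(x) + np.linalg.norm(y), eps)

# Two similar odorants: strongly overlapping PN rate patterns.
odor_a = rng.random(n_pn)
odor_b = 0.8 * odor_a + 0.2 * rng.random(n_pn)

for inh in [0.0, 2.0, 3.0, 4.0, 5.0]:
    kc_a, kc_b = kc_response(odor_a, inh), kc_response(odor_b, inh)
    active = np.mean(kc_a > 0)  # fraction of KCs responding to odor A
    d = normalized_distance(w_kc_mbon @ kc_a, w_kc_mbon @ kc_b)
    print(f"inhibition={inh:.1f}  active KCs={active:.3f}  MBON separation={d:.4f}")
```

In this toy setup, stronger feedback inhibition recruits fewer KCs, and the normalized separation between the two odorants' MBON vectors grows, mirroring the abstract's point that sparser KC codes yield better-separated MBON representations.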
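The plasticity claim can be sketched in the same spirit. Below is a minimal pair-based spike-timing-dependent plasticity rule for the KC–MBON synapse; the amplitudes, time constant, and spike times are illustrative assumptions, not the paper's fitted parameters, and the sketch shows only the update rule, not the separation result itself.

```python
import numpy as np

def stdp_dw(t_pre, t_post, a_plus=0.01, a_minus=0.012, tau=20.0):
    """Weight change for one KC (pre) / MBON (post) spike pair, times in ms.
    Pre-before-post potentiates; post-before-pre depresses. Amplitudes and
    time constant are illustrative assumptions."""
    dt = t_post - t_pre
    if dt >= 0:
        return a_plus * np.exp(-dt / tau)
    return -a_minus * np.exp(dt / tau)

# All-to-all pairing of example spike trains (times in ms).
kc_spikes = [10.0, 55.0, 120.0]    # hypothetical KC spike times
mbon_spikes = [15.0, 50.0, 130.0]  # hypothetical MBON spike times

w = 0.5  # initial KC->MBON weight (arbitrary units)
for t_pre in kc_spikes:
    for t_post in mbon_spikes:
        w += stdp_dw(t_pre, t_post)
print(f"updated KC->MBON weight: {w:.4f}")
```

Applied across the KC population during learning, such a rule depresses synapses from KCs that fire after an MBON and potentiates those that lead it; the abstract reports that weight changes of this kind increase, rather than decrease, the distance between odor representations seen by the MBONs.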