We present a Bayesian statistical theory of context learning in the rodent hippocampus. While context is often defined experimentally in relation to specific background cues or task demands, we advance a single, more general notion of context that suffices to account for a variety of learning phenomena. Specifically, a context is defined as a statistically stationary distribution of experiences, and context learning is defined as the problem of forming contexts out of groups of experiences that cluster together in time. The challenge of context learning is solving the model selection problem: How many contexts make up the rodent's world? Solving this problem requires balancing two opposing goals: minimizing the variability of the distribution of experiences within each context and minimizing the likelihood of transitioning between contexts. The theory explains why hippocampal place cell remapping sometimes develops gradually over many days of experience and why even consistent landmark differences may need to be relearned after other environmental changes. It also accounts for progressive performance improvements in serial reversal learning, based on a clear dissociation between the incremental process of context learning and the relatively abrupt process of context selection. The impact of partial reinforcement on reversal learning is addressed as well. Finally, the theory explains why alternating sequence learning does not consistently produce unique context-dependent sequence representations in the hippocampus.
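One way to make the model selection tradeoff concrete is a minimal sketch assuming a hidden-Markov-style formulation; the symbols $x_t$, $c_t$, $\theta_c$, and $\epsilon$ below are illustrative notation, not taken from the paper. Let $x_1, \dots, x_T$ be a sequence of experiences and $c_1, \dots, c_T$ their latent context assignments. A posterior over assignments then factors as

\[
p(c_{1:T} \mid x_{1:T}) \;\propto\; \prod_{t=1}^{T} p(x_t \mid \theta_{c_t}) \, p(c_t \mid c_{t-1}),
\]

where the likelihood term $p(x_t \mid \theta_{c_t})$ rewards low within-context variability (favoring more, tighter contexts), while a sticky transition prior, e.g. $p(c_t = c_{t-1}) = 1 - \epsilon$ with small $\epsilon$, penalizes frequent context switches (favoring fewer contexts). Under this reading, choosing the number of contexts that best balances the two terms is exactly the model selection problem described above.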