From sequences of discrete events, humans build mental models of their world. This process, referred to as graph learning, produces a model encoding the graph of event-to-event transition probabilities. Recent evidence suggests that some networks are easier to learn than others, but the neural underpinnings of this effect remain unknown. Here we use fMRI to show that, even over short timescales, the network structure of a temporal sequence of stimuli determines both the fidelity of event representations and the dimensionality of the space in which those representations are encoded: when the graph was modular rather than lattice-like, BOLD representations in visual areas better predicted trial identity and displayed higher intrinsic dimensionality. More broadly, our study shows that network context influences the strength of learned neural representations, motivating future work on the design, optimization, and adaptation of network contexts for distinct types of learning.
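To make the stimulus-generation idea concrete, the following is a minimal sketch of sampling a stimulus sequence as a random walk on a graph of transition probabilities. The graph here is a toy two-module network chosen for illustration; the node counts, topology, and walk parameters are assumptions, not the structures used in the study.

```python
import numpy as np

# Toy modular graph (illustrative, not the paper's exact design):
# nodes 0-3 and 4-7 form two fully connected modules joined by one bridge edge.
A = np.zeros((8, 8), dtype=int)
for mod in (range(0, 4), range(4, 8)):
    for i in mod:
        for j in mod:
            if i != j:
                A[i, j] = 1          # connect every pair within a module
A[3, 4] = A[4, 3] = 1                # single bridge between the two modules

# Row-normalise the adjacency matrix into event-to-event transition probabilities.
P = A / A.sum(axis=1, keepdims=True)

def random_walk(P, start=0, length=20, seed=0):
    """Sample a stimulus sequence by walking the transition graph."""
    rng = np.random.default_rng(seed)
    walk = [start]
    for _ in range(length - 1):
        walk.append(int(rng.choice(len(P), p=P[walk[-1]])))
    return walk

seq = random_walk(P, length=20)
```

A lattice-like control sequence could be generated the same way by swapping in a ring-lattice adjacency matrix with the same number of nodes and edges, isolating graph topology as the variable of interest.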