The ability to stably maintain visual information over brief delays is central to healthy cognitive functioning, as is the ability to differentiate such internal representations from external inputs. One possible way to achieve both is via multiple concurrent mnemonic representations along the visual hierarchy that differ systematically from the representations of perceptual inputs. To test this possibility, we examined orientation representations along the visual hierarchy during perception and working memory. Human participants directly viewed, or held in mind, oriented grating patterns, and the similarity between fMRI activation patterns for different orientations was calculated throughout retinotopic cortex. During direct viewing of grating stimuli, similarity was relatively evenly distributed among all orientations, while during working memory the similarity was higher around oblique orientations. We modeled these differences in representational geometry based on the known distribution of orientation information in the natural world: The "veridical" model uses an efficient coding framework to capture hypothesized representations during visual perception. The "categorical" model assumes that different "psychological distances" between orientations result in orientation categorization relative to cardinal axes. During direct perception, the veridical model explained the data well. During working memory, the categorical model gradually gained explanatory power over the veridical model for increasingly anterior retinotopic regions. Thus, directly viewed images are represented veridically, but once visual information is no longer tethered to the sensory world there is a gradual progression to more categorical mnemonic formats along the visual hierarchy.
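The similarity analysis described above can be sketched in a few lines: pairwise correlations between multi-voxel activation patterns, one pattern per orientation, yielding a representational similarity matrix. This is a minimal illustration with simulated data; the specific counts, ROI names, and pipeline details are assumptions, not the authors' actual methods.

```python
import numpy as np

rng = np.random.default_rng(0)

n_orientations = 8   # e.g. gratings spaced 22.5 degrees apart (assumption)
n_voxels = 100       # voxels in one retinotopic ROI (assumption)

# Simulated activation patterns: one multi-voxel pattern per orientation.
# In the real analysis these would come from fMRI responses.
patterns = rng.standard_normal((n_orientations, n_voxels))

# Representational similarity matrix: Pearson correlation between the
# activation patterns of every pair of orientations.
similarity = np.corrcoef(patterns)

assert similarity.shape == (n_orientations, n_orientations)
# Diagonal entries are each pattern's correlation with itself (exactly 1).
assert np.allclose(np.diag(similarity), 1.0)
```

Candidate representational geometries (such as the "veridical" and "categorical" models) can then be compared by how well each predicts the off-diagonal structure of this matrix.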