Abstract

Visual short-term memory (VSTM) is a cognitive structure that temporarily maintains a limited amount of visual information in the service of current cognitive goals. There is active theoretical debate regarding how the limits of VSTM should be construed. According to discrete-slot models of capacity, these limits are set in terms of a discrete number of slots that store individual objects in an all-or-none fashion. According to alternative continuous-resource models, the limits of VSTM are set in terms of a resource that can be distributed to bolster some representations over others in a graded fashion. Hybrid models have also been proposed. We tackled the classic question of how to construe VSTM structure in a novel way, by examining how contending models explain data within traditional VSTM tasks and also how they generalize across different VSTM tasks. Specifically, we fit theoretical ROCs derived from a suite of models to two popular VSTM tasks: a change-detection task in which participants had to remember simple features and a rapid serial visual presentation task in which participants had to remember real-world objects. In 3 experiments we assessed the fit and predictive ability of each model and found consistent support for pure continuous-resource models of VSTM. To gain a fuller understanding of the nature of limits in VSTM, we also evaluated the ability of these models to jointly model the two tasks. These joint modeling analyses revealed additional support for pure continuous-resource models, but also evidence that performance across the two tasks cannot be captured by a common set of parameters. We provide an interpretation of these signal detection models that aligns with the idea that differences among memoranda and across encoding conditions alter the memory signal of representations in VSTM.
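To make the contrast between the model classes concrete: discrete-slot (all-or-none) models predict linear theoretical ROCs, whereas continuous-resource models, formalized as signal detection models, predict curvilinear ROCs. The following is a minimal sketch of these two ROC shapes, assuming a standard slot mixture model and an equal-variance signal-detection model; the code and parameter names (k, N, dprime) are illustrative and are not the authors' implementation.

```python
# Illustrative sketch (not the authors' code): theoretical ROC predictions
# for a discrete-slot model vs. an equal-variance signal-detection
# (continuous-resource) model in a change-detection task.
import numpy as np
from scipy.stats import norm

def slot_roc(k, N, guess_rates):
    """Discrete-slot ROC: with probability d = min(k/N, 1) the probed item
    occupies a slot and the change is detected with certainty; otherwise
    the observer guesses 'change' at rate g. Sweeping g traces a LINEAR
    ROC with intercept d and slope (1 - d)."""
    d = min(k / N, 1.0)
    fa = guess_rates                      # false-alarm rate = g
    hit = d + (1.0 - d) * guess_rates     # hit rate = d + (1 - d) * g
    return fa, hit

def sdt_roc(dprime, criteria):
    """Equal-variance SDT ROC: memory strength is graded, shifting the
    'change' distribution by d'. Sweeping the criterion c traces a
    CURVILINEAR ROC."""
    fa = norm.sf(criteria)                # P(FA)  = 1 - Phi(c)
    hit = norm.sf(criteria - dprime)      # P(hit) = 1 - Phi(c - d')
    return fa, hit

# Trace each theoretical ROC across its confidence/criterion continuum.
g = np.linspace(0.0, 1.0, 101)            # guessing rates for the slot model
c = np.linspace(-3.0, 3.0, 101)           # decision criteria for the SDT model
slot_fa, slot_hit = slot_roc(k=3, N=6, guess_rates=g)
sdt_fa, sdt_hit = sdt_roc(dprime=1.5, criteria=c)
```

Fitting these predicted (fa, hit) curves to confidence-rating data is what lets the ROC shape adjudicate between the model classes, which is the logic the abstract's analyses rely on.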
