Abstract

Though much progress has been made in understanding feature integration, debate remains regarding how objects are represented in the mind based on their constituent features. Here, we advance this debate by introducing a novel shape-color "conjunction task" to reconstruct memory resolution for multiple object features simultaneously. In a first experiment, we replicate and extend a classic paradigm originally tested using a change detection task. Replicating previous work, memory resolution for individual features was reduced when the number of objects increased, regardless of the number of to-be-remembered features. Extending previous work, we found that high-resolution memory (nearly perfect in resemblance to the target) was selectively impacted by the number of to-be-remembered features. Applying a data-driven statistical model of stochastic dependence, we found robust evidence of integration for lower-resolution feature memories, but less evidence for integration of high-resolution feature memories. These results suggest that memory resolution for individual features can be higher than memory resolution for their integration. In a second experiment, which manipulated the nature of distracting information, we examined whether object features were bound directly to each other or by virtue of shared spatial location. Feature integration was disrupted by distractors sharing the visual features of target objects, but not by distractors sharing their spatial location, suggesting that feature integration can be driven by direct binding between shape and color features in memory. Our results constrain theoretical models of object representation, providing empirical support for hierarchical representations of both integrated and independent features.
