Abstract

The visual system sometimes fails, partially or completely, to encode and/or retrieve spatial relations among parts of an object. For example, targets can easily be confused with their mirror images, especially when they must be retained in memory. In the current experiments we ask whether our representations of spatial relations can be amended by information from different cognitive domains. Specifically, we ask whether the failure to form a stable representation of spatial relations among parts can be overcome by the use of linguistic information. Four-year-olds saw squares split by color and matched them after a delay. In Experiment 1, children saw the target and were told either “Look, this is a blicket” (Label condition) or “Look!” (No-Label condition). Then three choices appeared: the target (e.g., a vertical split with red on the left and green on the right), its mirror image, and another square with a different internal split (e.g., horizontal). Overall, children performed better than chance. However, their errors were almost exclusively mirror-image confusions, suggesting that children failed to bind color and location (e.g., red on the left, green on the right). There was no difference between the No-Label and Label conditions, suggesting that the whole-object novel label did not help children form a stable representation of the spatial relations among the parts. Experiment 2 tested whether color–location binding can be improved by providing language that might bind these features. Children were shown a target and told, for example, “The red is on the left.” Performance was reliably better than in Experiment 1, suggesting that language did help children bind color and location. Experiments 3 and 4 explored whether the same improvement could be achieved by increasing non-linguistic attention to the target (flashing the red part, Experiment 3) or by using neutral relational language (e.g., “The red is touching the green,” Experiment 4). Neither experiment showed enhanced performance, suggesting that language can augment visual–spatial representations only if it conveys very specific information (e.g., direction). Overall, the results suggest that specific linguistic information can help form a stable representation of spatial relations and that this effect is not attributable to general attentional effects.
