Abstract

In this paper, we present a case study that explores the nature and development of the mechanisms by which language interacts with and influences our ability to represent and retain information from one of our most important non-linguistic systems – vision. In previous work (Dessalegn & Landau, 2008), we showed that 4-year-olds remembered conjunctions of visual features better when the visual target was accompanied by a sentence containing an asymmetric spatial predicate (e.g., the yellow is to the left of the black), but not when the visual target was accompanied by a sentence containing a novel noun (e.g., look at the dax) or a symmetric spatial predicate (e.g., the yellow is touching the black). In this paper, we extend these findings. In three experiments, 3-, 4- and 6-year-olds were shown square blocks split in half by color vertically, horizontally or diagonally (e.g., yellow-left, black-right) and were asked to perform a delayed-matching task. We found that sentences containing asymmetric spatial predicates (e.g., the yellow is to the left of the black) and asymmetric non-spatial predicates (e.g., the yellow is prettier than the black) helped 4-year-olds, although not to the same extent. By contrast, 3-year-olds did not benefit from the different linguistic instructions at all, while 6-year-olds performed at ceiling in the task with or without the relevant sentences. Our findings suggest that by age 4, the effects of language on non-linguistic tasks depend on highly abstract representations of the linguistic instructions and are momentary, seen only in the context of the task. We further speculate that language becomes more automatically engaged in non-linguistic tasks over development.
