Abstract

The effect of labels on nonlinguistic representations is the focus of substantial theoretical debate in the developmental literature. A recent empirical study demonstrated that ten-month-old infants respond differently to objects for which they know a label relative to unlabeled objects. One account of these results is that infants’ label representations are incorporated into their object representations, such that when the object is seen without its label, a novelty response is elicited. These data are compatible with two recent theories of integrated label-object representations, one of which assumes labels are features of object representations, and one which assumes labels are represented separately, but become closely associated across learning. Here, we implement both of these accounts in an auto-encoder neurocomputational model. Simulation data support an account in which labels are features of objects, with the same representational status as the objects’ visual and haptic characteristics. Then, we use our model to make predictions about the effect of labels on infants’ broader category representations. Overall, we show that the generally accepted link between internal representations and looking times may be more complex than previously thought.

Highlights

  • The nature of the relationship between labels and nonlinguistic representations has been the focus of recent theoretical debate in the developmental literature

  • This approach takes a middle ground between the labels-as-symbols and the labels-as-features views in that labels do not act at the same level as other object features, but that an integrated object representation is formed through the association between perceptual object features and labels

  • The current simulations demonstrate that a labels-as-features account can explain empirical looking time data from 10-month-old infants pre-trained with one labeled and one unlabeled 3D object

INTRODUCTION

The nature of the relationship between labels and nonlinguistic representations has been the focus of recent theoretical debate in the developmental literature. Westermann and Mareschal [3] suggested a compound-representations account in which labels are encoded in the same representational space as objects and drive learning over time, but do not function at the same level as other perceptual features. Rather, they become closely integrated with object representations over learning and result in mental representations for objects that reflect both perceptual similarity and whether two objects share the same label or different labels. On the competing labels-as-features account, labels are encoded as object features with the same representational status as the objects’ visual and haptic characteristics. Testing the hypothesis that (previously learned) labels would affect infants’ object representations, Twomey and Westermann [8] predicted that infants should exhibit different looking times to labeled and unlabeled objects. Here we implemented both accounts in simple computational models to explore which of the labels-as-features and compound-representations accounts best explains Twomey and Westermann’s [8] looking time data.
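To make the labels-as-features idea concrete, the sketch below (an illustrative toy model, not the authors' exact implementation; the feature vectors, network size, and learning parameters are all assumptions) trains a one-hidden-layer auto-encoder on an input in which a label unit is simply concatenated with visual and haptic feature units. Reconstruction error to a test input can then stand in as a proxy for the novelty response measured in looking times: presenting the familiar labeled object in silence (label unit off) yields a partially novel input pattern.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def train_autoencoder(inputs, n_hidden=5, lr=0.5, epochs=2000):
    """Train a one-hidden-layer auto-encoder with plain backpropagation."""
    n_in = inputs.shape[1]
    W1 = rng.normal(0.0, 0.5, (n_in, n_hidden))
    W2 = rng.normal(0.0, 0.5, (n_hidden, n_in))
    for _ in range(epochs):
        for x in inputs:
            h = sigmoid(x @ W1)              # encode
            y = sigmoid(h @ W2)              # decode (reconstruct the input)
            d_out = (x - y) * y * (1 - y)    # backprop through sigmoid units
            d_hid = (d_out @ W2.T) * h * (1 - h)
            W2 += lr * np.outer(h, d_out)
            W1 += lr * np.outer(x, d_hid)
    return W1, W2

def reconstruction_error(x, W1, W2):
    """Summed squared error between an input and its reconstruction."""
    y = sigmoid(sigmoid(x @ W1) @ W2)
    return float(np.sum((x - y) ** 2))

# Two hypothetical training objects: 6 perceptual units plus 1 label unit.
labeled   = np.array([1, 0, 1, 0, 1, 0, 1.0])  # label unit on
unlabeled = np.array([0, 1, 0, 1, 0, 1, 0.0])  # label unit off
W1, W2 = train_autoencoder(np.stack([labeled, unlabeled]))

# The familiar labeled object presented without its label: a partially
# novel pattern, so its error should exceed that of the trained pattern.
silent = labeled.copy()
silent[-1] = 0.0
err_silent  = reconstruction_error(silent, W1, W2)
err_trained = reconstruction_error(labeled, W1, W2)
```

On this account, the larger error for the silently presented object is the model analogue of the longer looking times reported for known-label objects seen without their labels.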

Model Architecture
Procedure
Results
Discussion
EXPERIMENT 2
Stimuli
GENERAL DISCUSSION
