Abstract

Children learning language efficiently process single words, activating semantic, phonological, and other features of words during recognition. We investigated lexical recognition in deaf children acquiring American Sign Language (ASL) to determine how perceiving language in the visual-spatial modality affects lexical recognition. Twenty native- or early-exposed signing deaf children (ages 4 to 8 years) participated in a visual world eye-tracking study. Children were presented with a single ASL sign, a target picture, and three competitor pictures that varied in their phonological and semantic relationship to the target. Children shifted gaze to the target picture shortly after sign offset. Children showed robust evidence for activation of semantic but not phonological features of signs; however, in their behavioral responses, children were most susceptible to phonological competitors. Results demonstrate that single word recognition in ASL is largely parallel to spoken language recognition among children who are developing a mature lexicon.
