Abstract
In bilingual word recognition, cross-language activation has been found in both unimodal bilinguals (e.g., Chinese-English bilinguals) and bimodal bilinguals (e.g., American Sign Language-English bilinguals). However, it remains unclear how signs' phonological parameters, spoken words' orthographic and phonological representations, and language proficiency affect cross-language activation in bimodal bilinguals. To address these issues, we recruited deaf Chinese Sign Language (CSL)-Chinese bimodal bilinguals as participants and conducted two experiments using the implicit priming paradigm and a semantic relatedness decision task. Experiment 1 showed cross-language activation from Chinese to CSL and demonstrated that the phonological parameters of CSL signs modulated this activation. Experiment 2 further revealed cross-language activation in the reverse direction, from CSL to Chinese, with the orthographic and phonological representations of Chinese words playing similar roles in the activation. Moreover, a comparison between Experiments 1 and 2 indicated that language proficiency influenced cross-language activation. The findings are discussed in relation to the Bilingual Interactive Activation Plus (BIA+) model, the deaf BIA+ model, and the Bilingual Language Interaction Network for Comprehension of Speech (BLINCS) model.