Abstract

Conceptual knowledge is fundamental to human cognition. Yet, the extent to which it is influenced by language is unclear. Studies of semantic processing show that similar neural patterns are evoked by the same concepts presented in different modalities (e.g., spoken words and pictures or text) [1, 2, 3]. This suggests that conceptual representations are “modality independent.” However, an alternative possibility is that the similarity reflects retrieval of common spoken language representations. Indeed, in hearing spoken language users, text and spoken language are co-dependent [4, 5], and pictures are encoded via visual and verbal routes [6]. A parallel approach investigating semantic cognition shows that bilinguals activate similar patterns for the same words in their different languages [7, 8]. This suggests that conceptual representations are “language independent.” However, this has only been tested in spoken language bilinguals. If different languages evoke different conceptual representations, this should be most apparent comparing languages that differ greatly in structure. Hearing people with signing deaf parents are bilingual in sign and speech: languages conveyed in different modalities. Here, we test the influence of modality and bilingualism on conceptual representation by comparing semantic representations elicited by spoken British English and British Sign Language in hearing early sign-speech bilinguals. We show that representations of semantic categories are shared for sign and speech, but not for individual spoken words and signs. This provides evidence for partially shared representations for sign and speech and shows that language acts as a subtle filter through which we understand and interact with the world.

Highlights

  • We found reliable within-modality distances in six clusters (Figure 2A): (1) in bilateral V1–V3 and the lateral occipital complex (LOC) (−14 −96 10); (2) the right anterior superior temporal gyrus (58 −4 −2); (3) the left anterior superior and middle temporal gyri; …

  • The results suggest that semantic category structure drives similarity between sign and speech in left pMTG/ITG




RESULTS

Sign-Specific Responses
Five regions showed greater representational distances for sign than for speech: (1) a cluster spreading across left V1–V3 (−6 −98 16); (2) a cluster within right V1–V3 (22 −90 16); (3) a cluster in the left LOC and middle temporal visual area (MT)/V5 (−44 −80 −6); (4) the left superior occipital gyrus and superior parietal lobule (−10 −84 42); and (5) the left lingual gyrus spreading to the cerebellum (−4 −48 −8) (Figure 4A; Table S2). Activity in these regions was not consistent with sign-specific semantic representations, as the category-based model was not a significant fit in any region (all p > 0.037) after adjusting alpha to p < 0.010 (0.05/5) for five clusters/tests. Activity patterns were instead characterized by a fit to the semantic feature model (both p < 3.10 × 10⁻⁵) but were driven by item-based encoding (p < 1.34 × 10⁻⁷), with additional sensitivity to signer identity (both p < 3.07 × 10⁻⁶; Figure 4), consistent with sign form representations.
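The model fits reported here (category-based and semantic feature models tested against neural representational distances) follow the logic of representational similarity analysis (RSA). The sketch below illustrates that core computation on synthetic data, assuming a standard RSA pipeline: a neural representational dissimilarity matrix (RDM) is correlated with a binary category-model RDM and assessed with a permutation test. The item counts, category structure, and voxel counts are illustrative assumptions, not the authors' analysis code.

```python
# Minimal RSA sketch: fit a category-based model RDM to a neural RDM.
# Shapes and category structure are hypothetical, for illustration only.
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(0)

n_items, n_voxels = 16, 200                           # e.g., 16 stimuli, 200 voxels in a cluster
patterns = rng.standard_normal((n_items, n_voxels))   # stand-in for fMRI activity patterns

# Neural RDM: pairwise correlation distance between item activity patterns
# (condensed vector corresponding to the upper triangle)
neural_rdm = pdist(patterns, metric="correlation")

# Category-based model RDM: 0 if two items share a semantic category, 1 otherwise
categories = np.repeat(np.arange(4), 4)               # hypothetical: 4 categories x 4 items
model_rdm = pdist(categories[:, None], metric="hamming")

# Model fit: rank correlation between neural and model dissimilarities
rho, _ = spearmanr(neural_rdm, model_rdm)

# Permutation test: shuffle item labels of the neural RDM to build a null for rho
square = np.zeros((n_items, n_items))
square[np.triu_indices(n_items, k=1)] = neural_rdm
square += square.T                                    # full symmetric RDM

n_perm = 5000
null = np.empty(n_perm)
for i in range(n_perm):
    p = rng.permutation(n_items)
    perm_rdm = square[np.ix_(p, p)][np.triu_indices(n_items, k=1)]
    null[i] = spearmanr(perm_rdm, model_rdm)[0]

p_value = (np.sum(null >= rho) + 1) / (n_perm + 1)    # one-sided, with +1 correction
print(f"rho = {rho:.3f}, p = {p_value:.4f}")
```

The same comparison extends naturally to a semantic feature model: replace the binary category RDM with pairwise distances between semantic feature vectors for the items, and fit it in the same way.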
