Abstract

We present the first neuroimaging data on the perception of Cued Speech (CS) by deaf adults who are native users of CS. CS is a visual mode of communicating a spoken language through a set of manual cues that accompany lipreading and disambiguate it. With CS, sublexical units of the oral language are conveyed clearly and completely through the visual modality, without requiring hearing. Comparing the neural processing of CS in deaf individuals with the processing of audiovisual (AV) speech in normally hearing individuals offers a unique opportunity to explore the similarities and differences in neural processing of an oral language delivered in a visuo-manual vs. an AV modality. The study included deaf adults who were early CS users and hearing native users of French who process speech audiovisually. Words were presented in an event-related fMRI design, with three conditions per group. The deaf participants saw CS words (manual + lipread), words presented as manual cues alone, and words presented for lipreading without manual cues. The hearing group saw AV spoken words, audio-alone words, and lipread-alone words. Three findings are highlighted. First, the middle and superior temporal gyri (excluding Heschl’s gyrus) and the left inferior frontal gyrus pars triangularis constituted a common, amodal neural basis for AV and CS perception. Second, integration was inferred in posterior parts of the superior temporal sulcus for audio and lipread information in AV speech, but in the occipito-temporal junction, including MT/V5, for the manual cues and lipreading in CS. Third, the perception of manual cues showed a much greater overlap with the regions activated by CS (manual + lipreading) than lipreading alone did, supporting the notion that manual cues play a larger role than lipreading in CS processing.
The present study contributes to a better understanding of the role of manual cues as support of visual speech perception in the framework of the multimodal nature of human communication.

Highlights

  • There is increasing evidence that sensory-deprived individuals make adjustments to their sensory loss in order to interact effectively within their environment

  • Psychophysiological interaction (PPI) analyses in MT/V5 indicate that, besides large commonalities, the neural basis of speech perception for CS labial + manual (CSLM) words in deaf CS users is shifted toward posterior regions of the brain compared with AV speech in normally hearing participants, with activation peaks in the occipito-temporal junction and MT/V5

  • We report results from the first neuroimaging study of CS processing, a mode of communication in which the syllables and phonemes of a spoken language are conveyed solely through the visual modality in the absence of either speech or hearing

Introduction

There is increasing evidence that sensory-deprived individuals make adjustments to their sensory loss in order to interact effectively with their environment. For people who are deaf from birth or lost their hearing early in life, neural plasticity in the regions classically associated with auditory and speech sound processing is related to the lack of auditory experience and to the timing and nature of language experience (Cardin et al., 2013). Olulade et al. (2014) suggested that the nature of language experience (signed vs. oral) affects the development of gray matter volume in the language-processing cerebral regions of deaf adults, but this point remains to be confirmed.
