Abstract

Speech is a complex auditory stimulus that is processed at several time scales. Whereas consonant discrimination requires resolving rapid acoustic events, voice perception relies on slower cues. Humans, from preterm ages onward, are particularly efficient at encoding temporal cues. To compare the capacities of preterm neonates with those of other mammals, we tested anesthetized adult rats using exactly the same paradigm as that used in preterm neonates. We simultaneously recorded neural responses (using ECoG) and hemodynamic responses (using fNIRS) to series of human speech syllables and investigated the brain response to a change of consonant (ba vs. ga) and to a change of voice (male vs. female). The two methods yielded concordant results, although ECoG measures were more sensitive than fNIRS. Responses to syllables were bilateral, but with marked right-hemispheric lateralization. Responses to voice changes were observed with both methods, whereas only ECoG was sensitive to consonant changes. These results suggest that rats processed the speech envelope more effectively than fine temporal cues, in contrast with human preterm neonates, in whom the opposite pattern was observed. Cross-species comparisons thus constitute a valuable tool for defining the singularities of the human brain and the species-specific biases that may help human infants learn their native language.
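
For readers unfamiliar with change-detection designs, the paradigm can be sketched in a few lines of Python. This is a minimal, hypothetical illustration, not the authors' stimulation code: the token filenames, the four-syllable consonant × voice set, and the number of standards per trial are all assumptions made for the example.

```python
# Minimal sketch of a change-detection (oddball-style) trial: a short series
# of identical "standard" syllables followed by a "deviant" that differs in
# exactly one dimension, either consonant (ba vs. ga) or voice (male vs. female).
# All filenames and trial parameters below are hypothetical placeholders.
import random

SYLLABLES = {
    ("ba", "male"): "ba_male.wav",
    ("ba", "female"): "ba_female.wav",
    ("ga", "male"): "ga_male.wav",
    ("ga", "female"): "ga_female.wav",
}

def change_trial(standard, change, n_standards=4):
    """Return one trial as a list of sound files: repeated standards, then a deviant."""
    consonant, voice = standard
    if change == "consonant":
        deviant = ("ga" if consonant == "ba" else "ba", voice)
    else:  # change == "voice"
        deviant = (consonant, "female" if voice == "male" else "male")
    return [SYLLABLES[standard]] * n_standards + [SYLLABLES[deviant]]

rng = random.Random(0)
standard = rng.choice(sorted(SYLLABLES))
print(change_trial(standard, "consonant"))  # e.g. four repetitions, then a consonant change
print(change_trial(standard, "voice"))      # e.g. four repetitions, then a voice change
```

The response of interest is then the evoked difference between the deviant syllable and the same syllable presented as a standard.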

Highlights

  • Cross-species comparisons are crucial for understanding the brain specificities of each species

  • Hemodynamic responses were locally evoked in the somatosensory cortex, with a maximal response recorded on channel 7 (Fig. 2E)

  • Since functional near-infrared spectroscopy (fNIRS) probes the inter-optode cortical area, the maximal neural response on electrode 10, with an inversion of polarity on electrode 7, is congruent with the maximal fNIRS response on the same channel 7 (illustrated geometrically in the sketch after this list)
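
As a geometric illustration of the co-localization claimed above, the following hypothetical Python sketch places an fNIRS channel at the midpoint of its source-detector optode pair and finds the nearest ECoG electrode. All coordinates, optode pairings, and labels are invented for the example; the paper's actual montage is not reproduced here.

```python
# Hypothetical sketch: an fNIRS channel samples the cortex between its source
# (S) and detector (D) optodes, so its nominal position is the S-D midpoint.
# Coordinates (mm) and labels are made up for illustration only.
import numpy as np

optodes = {"S1": np.array([0.0, 0.0]), "D1": np.array([10.0, 0.0])}
channels = {"ch7": ("S1", "D1")}
electrodes = {"e7": np.array([5.0, 1.0]), "e10": np.array([12.0, 4.0])}

def channel_position(ch):
    """Nominal cortical sampling point of an fNIRS channel: the optode midpoint."""
    src, det = channels[ch]
    return (optodes[src] + optodes[det]) / 2.0

def nearest_electrode(ch):
    """ECoG electrode closest to the channel's nominal sampling point."""
    pos = channel_position(ch)
    return min(electrodes, key=lambda e: float(np.linalg.norm(electrodes[e] - pos)))

# With these invented coordinates, channel 7 lies closest to electrode 7,
# mirroring the kind of spatial congruence described in the highlight above.
print(channel_position("ch7"), nearest_electrode("ch7"))
```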


Introduction

Cross-species comparisons are crucial for understanding the brain specificities of each species. Humans are highly efficient at oral communication, but the neural architecture underlying this behavior has not been fully elucidated. Humans extract two main types of information from verbal exchanges. They use voice particularities to recognize other members of the group, as many other species do, but they also produce and understand complex and novel messages by means of a productive combinatorial system built from elementary units, the phonemes. Phonetic and voice representations are progressively elaborated along parallel streams in the superior temporal region, beyond the primary auditory cortex.

