There is a dearth of literature on whether language groups for whom aspiration and/or voicing are phonologically contrastive perceive these cues more accurately than groups that do not use them contrastively, and on whether cue type modulates perception in noise. This study examines the perception of laryngeal cues (voicing and aspiration) by Hindi, English, and Tamil listeners in quiet and in noise. Sixteen participants aged 20 to 45 years were included in each of the three language groups. The stimuli were bilabial stops contrasting phonetically in voicing and aspiration: voicing-lead [ba], short-lag [pa], and long-lag aspirated [pha]. One set corresponded to the Hindi phonemes /ba/, /pa/, and /pha/, and the second to the English phonemes /ba/ and /pa/ (phonetically [pa] and [pha], respectively); Tamil includes only the short-lag [pa] as a bilabial stop consonant. The stimuli were presented at 70 dB SPL, in quiet and in speech-shaped noise at a signal-to-noise ratio of 0 dB. Participants completed two speech identification tasks and one speech discrimination task. All three language groups showed patterns of perceptual assimilation shaped by their first language, and accuracy was generally higher in quiet than in noise. Hindi participants identified English /pa/ as Hindi /pha/ and English /ba/ as Hindi /pa/. American English participants identified Hindi /pha/ as English /pa/, and both Hindi /pa/ and Hindi /ba/ as English /ba/. In contrast, Tamil listeners generally perceived both the Hindi and English bilabial stops as a single category, regardless of voicing and aspiration. English and Hindi participants generally showed higher accuracy for native-language stimuli. Assimilation patterns in quiet and in noise differed across language groups for each stimulus type, and the aspirated stimuli were the most likely to be misperceived in noise by all groups (often as /ha/).
The results provide evidence that listeners accurately access native-language speech cues even in noise, and they contribute to a better understanding of cross-linguistic speech processing in noise.