Abstract

There has been a great deal of research on the cues for phonemes in isolated syllables, but relatively little research has been performed on the perceptibility of different phonetic features in ongoing speech. In order to investigate this problem we developed a “listening for mispronunciation” (LM) paradigm. In the LM paradigm, subjects are presented with a prose passage in which various words are mispronounced. Subjects are instructed to press a response key as quickly as possible whenever they detect a mispronunciation. Since mispronunciations can be produced by varying a single phonetic feature in a word, the LM paradigm provides a technique for determining the relative perceptibility of different phonetic features. In the first experiment we compared the relative perceptibility of voicing and place of articulation for word-initial stop consonants. Two versions of a short story were recorded. Words changed by the voicing feature in one version of the story (e.g., BOY to POY) were changed by place of articulation in the other version (e.g., BOY to DOY). Since the same word was changed in both versions, all contextual influences were held constant. The results showed significant differences in both the speed and accuracy with which listeners detected changes in voicing and place of articulation. In addition, a number of interesting asymmetries were observed, e.g., a change from [b] to [p] was not equivalent to a change from [p] to [b].
