Abstract

The auditory processing of consonants was investigated using an information-theoretic approach. Listeners identified eleven different Danish consonants spoken in a Consonant + Vowel + [l] environment. Each syllable was processed so that only a portion of the original audio spectrum was present. Three-quarter-octave bands of speech, with center frequencies of 750, 1500, and 3000 Hz, were presented individually and in combination. Confusion matrices were computed, and from these the amount of information transmitted for each of three phonetic features (voicing, manner of articulation, and place of articulation) was derived for each condition. Such analyses reveal whether the information associated with a given phonetic feature combines linearly across the acoustic spectrum. Our results indicate that information associated with voicing and manner of articulation combines in quasi-linear fashion across the frequency spectrum. In contrast, place-of-articulation cues are integrated synergistically: the information associated with two or three bands combined is far greater than predicted from the amount of information associated with the individual spectral bands. Because consonants are essential for understanding speech, and place-of-articulation information is crucial for decoding consonants, spoken-language perception is likely to reflect highly non-linear processes.
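The core computation described above, information transmitted for a phonetic feature, is conventionally the Miller and Nicely (1955) transmitted-information measure, i.e. the mutual information between stimulus and response after collapsing the consonant confusion matrix onto that feature. The sketch below is a minimal illustration of that standard calculation, not the authors' code; the function names, the feature-labeling scheme, and the linearity comparison at the end are assumptions for illustration.

```python
import numpy as np

def transmitted_information(confusions):
    """Transmitted information in bits from a stimulus-response confusion
    matrix (the Miller & Nicely 1955 measure: mutual information between
    the presented stimulus and the listener's response)."""
    p = confusions / confusions.sum()       # joint probabilities p(s, r)
    ps = p.sum(axis=1, keepdims=True)       # stimulus marginals p(s)
    pr = p.sum(axis=0, keepdims=True)       # response marginals p(r)
    nz = p > 0                              # skip empty cells: 0 * log 0 = 0
    return float(np.sum(p[nz] * np.log2(p[nz] / (ps @ pr)[nz])))

def collapse_by_feature(confusions, feature_of):
    """Collapse a consonant confusion matrix onto one phonetic feature,
    e.g. feature_of = ['voiced', 'voiceless', ...] giving each consonant's
    value, so transmitted_information() scores that feature alone."""
    values = sorted(set(feature_of))
    idx = {v: k for k, v in enumerate(values)}
    m = np.zeros((len(values), len(values)))
    for i, fi in enumerate(feature_of):
        for j, fj in enumerate(feature_of):
            m[idx[fi], idx[fj]] += confusions[i, j]
    return m

# Hypothetical check of (non-)linear combination across bands: compare the
# information transmitted in a combined-band condition against the sum of
# the single-band conditions (matrix names here are placeholders).
# combined = transmitted_information(collapse_by_feature(C_750_1500, place))
# summed   = sum(transmitted_information(collapse_by_feature(C, place))
#                for C in (C_750, C_1500))
# combined >> summed would indicate the synergistic integration reported
# for place of articulation; combined approximately equal to summed would
# indicate the quasi-linear combination reported for voicing and manner.
```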
