Abstract
Different patterns of performance across vowels and consonants in tests of categorization and discrimination indicate that vowels tend to be perceived more continuously, or less categorically, than consonants. The present experiments examined whether analogous differences in perception would arise in nonspeech sounds that share critical transient acoustic cues of consonants and steady-state spectral cues of simplified synthetic vowels. Listeners were trained to categorize novel nonspeech sounds varying along a continuum defined by a steady-state cue, a rapidly-changing cue, or both cues. Listeners' categorization of stimuli varying on the rapidly-changing cue showed a sharp category boundary, and post-training discrimination was well predicted from the assumption of categorical perception. Listeners more accurately discriminated but less accurately categorized steady-state nonspeech stimuli. When listeners categorized stimuli defined by both rapidly-changing and steady-state cues, discrimination performance was accurate and the categorization function exhibited a sharp boundary. These data are similar to those found in experiments with dynamic vowels, which are defined by both steady-state and rapidly-changing acoustic cues. A general account for the speech and nonspeech patterns is proposed based on the supposition that the perceptual trace of rapidly-changing sounds decays faster than the trace of steady-state sounds.
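The claim that discrimination "was well predicted from the assumption of categorical perception" refers to the classical prediction that listeners covertly label each stimulus and discriminate only via those labels. Under that assumption, ABX discrimination accuracy follows directly from the categorization probabilities of the two stimuli. The sketch below implements that standard prediction (the Haskins-model formula); the specific probability values used in the example are illustrative, not taken from the paper.

```python
def predicted_abx_accuracy(p_a, p_b):
    """Predicted ABX discrimination accuracy under strictly categorical
    perception.

    p_a, p_b: probability of assigning stimulus A (resp. B) to category 1.
    The listener covertly labels A, B, and X, responds from the labels,
    and guesses whenever the labels are uninformative. This yields
    P(correct) = 1/2 * (1 + (p_a - p_b)**2).
    """
    return 0.5 * (1.0 + (p_a - p_b) ** 2)

# Two stimuli within a category (same labeling) are predicted to be
# discriminated at chance; a pair straddling a sharp boundary is
# predicted to be discriminated nearly perfectly.
within = predicted_abx_accuracy(0.9, 0.9)    # 0.5 (chance)
across = predicted_abx_accuracy(0.95, 0.05)  # 0.905
```

A continuum with a sharp categorization boundary therefore predicts a discrimination peak at the boundary and chance-level discrimination elsewhere, which is the consonant-like pattern the abstract describes.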
Highlights
Patterns of performance in categorization and discrimination tasks differ across classes of speech sounds
We predict that nonspeech sounds that are defined by acoustic cues that reflect these differences will elicit the same patterns of categorization and discrimination performance as stop consonants and synthetic steady-state vowels
The relatively short training procedure used in this experiment was sufficient for participants to learn to categorize stimuli according to onset/offset ramp length
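The rapidly-changing cue named above is onset/offset ramp length. A minimal sketch of how such a nonspeech continuum could be synthesized is given below; the carrier frequency, duration, sample rate, and the specific ramp values are illustrative assumptions, not the paper's actual stimulus parameters.

```python
import numpy as np

def make_ramped_tone(freq_hz=1000.0, dur_s=0.25, ramp_s=0.010, sr=44100):
    """Synthesize a pure tone with linear onset/offset amplitude ramps.

    Varying ramp_s along a continuum yields nonspeech stimuli whose
    distinguishing cue is a rapid amplitude transient, analogous to the
    transient cues of stop consonants; a steady-state cue (e.g. freq_hz)
    can be varied instead, or in addition, for the other conditions.
    """
    n = int(dur_s * sr)
    t = np.arange(n) / sr
    tone = np.sin(2 * np.pi * freq_hz * t)
    envelope = np.ones(n)
    n_ramp = int(ramp_s * sr)
    if n_ramp > 0:
        ramp = np.linspace(0.0, 1.0, n_ramp)
        envelope[:n_ramp] = ramp          # onset ramp
        envelope[-n_ramp:] = ramp[::-1]   # offset ramp
    return tone * envelope

# A seven-step continuum of onset/offset ramp lengths (10-70 ms,
# hypothetical values chosen only for illustration)
continuum = [make_ramped_tone(ramp_s=r) for r in np.linspace(0.010, 0.070, 7)]
```

Listeners in the categorization task would then be trained to assign stimuli from one end of such a continuum to one category and stimuli from the other end to the other.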
Summary
Patterns of performance in categorization and discrimination tasks differ across classes of speech sounds. We hypothesize that these differences arise from differences in the way the auditory system processes the differing acoustic cues that distinguish vowels and consonants. We suggest that the rapid transients characteristic of many consonants are processed quite differently from the relatively steady-state frequency information that characterizes steady-state vowel and fricative stimuli. From this hypothesis, we predict that nonspeech sounds defined by acoustic cues reflecting these differences will elicit the same patterns of categorization and discrimination performance as stop consonants and synthetic steady-state vowels. Before turning to the experiments, we discuss in more detail the evidence for the points motivating our experiments.