Abstract

Within the information processing approach to speech perception, the problem of acoustic-phonetic variability is solved by positing multiple stages of processing that transform the acoustic input into a stable phonetic percept. If this problem is viewed as part of the broader problem of perceptual normalization, a question that arises is whether the mechanisms that normalize speech stimuli also normalize nonspeech stimuli. Five experiments addressed this question by investigating the normalization of musical timbre using a selective adaptation paradigm. The results parallel some of those found with speech, suggesting that a single mechanism normalizes both types of stimuli.
