Abstract

This chapter reviews the evidence that speech and music differ substantially in their requirements for spectral resolution, suggesting that the demands of the listening task must be understood in order to understand the importance of spectral resolution. The number of spectral channels needed depends on the difficulty of the listening task and on the situation. Speech recognition, because it is a highly trained pattern recognition process, requires only four spectral channels of envelope information in the appropriate tonotopic place. Six to eight spectral channels are required for speech recognition in noisy listening conditions, for difficult speech materials, or for listeners who are not native speakers of the language. In contrast, music requires at least 16 spectral channels even for identification of simple familiar melodies played as a single stream of notes. Recognition and enjoyment of more complex music and music with multiple instruments require at least 64 channels of spectral resolution, and possibly many more. This large difference between music and speech highlights the difference in how the brain utilizes information from the auditory periphery. To understand processing in the auditory system, it is important to understand the relative roles of fine detail from the periphery and top-down pattern processing by the brain for different tasks.
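
The phrase "spectral channels of envelope information" refers to the channel-vocoder paradigm commonly used in this line of research: the signal is divided into a small number of frequency bands, each band's temporal envelope is extracted, and the envelopes modulate band-limited carriers at the same tonotopic places. The sketch below is a minimal illustration of that idea, not the chapter's own method; the channel count, band edges, filter order, and noise carriers are illustrative assumptions.

```python
import numpy as np
from scipy.signal import butter, sosfilt, hilbert

def vocode(signal, fs, n_channels=4, f_lo=100.0, f_hi=8000.0):
    """Reduce a sound to n_channels of envelope information:
    band-pass the input, extract each band's temporal envelope,
    and use it to modulate noise filtered into the same band.
    Parameter values here are illustrative, not from the chapter."""
    # Logarithmically spaced band edges between f_lo and f_hi
    edges = np.geomspace(f_lo, f_hi, n_channels + 1)
    out = np.zeros_like(signal, dtype=float)
    for lo, hi in zip(edges[:-1], edges[1:]):
        sos = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
        band = sosfilt(sos, signal)
        envelope = np.abs(hilbert(band))           # temporal envelope of this band
        carrier = np.random.randn(len(signal))     # broadband noise carrier
        out += envelope * sosfilt(sos, carrier)    # envelope restored at the same tonotopic place
    return out / (np.max(np.abs(out)) + 1e-12)     # normalize to avoid clipping

# Example: a 4-channel rendering preserves enough for trained speech
# pattern recognition, while melodies typically need 16 or more channels.
# processed = vocode(speech_waveform, fs=16000, n_channels=4)
```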
