Abstract

Cutthroat evolution has given us seemingly magical abilities to hear speech in complex environments. For example, we can tell instantly, independent of timbre or loudness, whether a sound is close to us. In a crowded room we can switch attention at will between at least three simultaneous conversations, and involuntarily switch to one of them if our name is spoken. These feats are possible only if, without conscious attention, each voice has been separated into an independent neural stream. The separation process relies on the phase relationships between the harmonics above 1000 Hz that encode speech information, and on the neurology of the inner ear that has evolved to detect them. When phase is undisturbed, the harmonic phases align once in each fundamental period to create massive peaks in the sound pressure at the fundamental frequency. Pitch-sensitive filters can detect and separate these peaks from each other and from noise with amazing acuity. But reflections and sound systems randomize phases, with serious effects on attention, source separation, and intelligibility. This paper will describe the many ways ears and speech have co-evolved, and recent work on the importance of phase in acoustics and sound design.
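
To make the phase argument concrete, the following is a minimal numerical sketch, not taken from the paper: it synthesizes a harmonic complex lying above 1000 Hz, once with aligned phases and once with randomized phases, and compares the waveform crest factor (peak divided by RMS) as a rough proxy for the pressure peaks the abstract describes. The fundamental frequency, harmonic range, and sample rate are illustrative assumptions.

    import numpy as np

    fs = 16000                       # sample rate, Hz (assumed)
    f0 = 200                         # fundamental frequency, Hz (assumed)
    t = np.arange(0, 0.05, 1 / fs)   # 50 ms of signal
    harmonics = range(6, 21)         # harmonics 6..20: 1200-4000 Hz, above 1 kHz

    def harmonic_complex(phases):
        """Sum of equal-amplitude cosine harmonics of f0 with the given phases."""
        return sum(np.cos(2 * np.pi * n * f0 * t + p)
                   for n, p in zip(harmonics, phases))

    def crest_factor(x):
        """Peak-to-RMS ratio: high when energy is concentrated in brief peaks."""
        return np.max(np.abs(x)) / np.sqrt(np.mean(x ** 2))

    rng = np.random.default_rng(0)
    aligned = harmonic_complex(np.zeros(len(harmonics)))
    randomized = harmonic_complex(rng.uniform(0, 2 * np.pi, len(harmonics)))

    # With aligned phases the harmonics sum coherently once per fundamental
    # period (every 5 ms at 200 Hz), producing large pressure peaks; random
    # phases spread the same energy in time and the peaks largely disappear.
    print(f"crest factor, aligned phases:    {crest_factor(aligned):.2f}")
    print(f"crest factor, randomized phases: {crest_factor(randomized):.2f}")

Both signals have identical magnitude spectra; only the phases differ, which is the point of the demonstration: the periodic peaks that a pitch-sensitive detector could exploit survive only when the phase relationships are undisturbed.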
