Abstract

Current acoustic measurement techniques typically require considerable time and heavy equipment. The effort and expense would be justified if the results accurately predicted the sound quality of a performance at an individual seat, but they do not. If listeners can hear and accurately report sound quality from live sound in different seats, then it should be possible to measure it from live sound as well. This paper describes a model of human hearing that promises to provide this ability, at least for quality aspects relating to clarity. The model is based on the ease with which information is carried from one or more sources to a listener. For speech, this involves both reverberant masking from late reverberation and interference with the direct sound from early reflections. To make such a measurement, we need to model how the ear and brain precisely localize sound sources and separate their sounds from each other and from acoustic interference, without added information from context, grammar, or prior knowledge. A description of such a model will be presented, along with results obtained from live sound.
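
To illustrate the distinction the abstract draws between direct sound, early reflections, and late reverberation, the sketch below splits a measured room impulse response into those three energy regions, in the spirit of conventional clarity measures such as C50. This is only a simplified illustration under assumed window lengths; the function name, the 5 ms and 50 ms boundaries, and the synthetic impulse response are illustrative choices, not the model described in the paper.

```python
import numpy as np

def direct_early_late_energy(ir, fs, direct_window_ms=5.0, early_end_ms=50.0):
    """Split an impulse response into direct, early-reflection, and late energy.

    A simplified, conventional energy split (illustrative only); window
    lengths are assumptions, not the paper's hearing model.
    """
    ir = np.asarray(ir, dtype=float)
    onset = int(np.argmax(np.abs(ir)))             # crude direct-sound onset
    d_end = onset + int(direct_window_ms * 1e-3 * fs)
    e_end = onset + int(early_end_ms * 1e-3 * fs)

    direct = np.sum(ir[onset:d_end] ** 2)          # direct-sound energy
    early = np.sum(ir[d_end:e_end] ** 2)           # early-reflection energy
    late = np.sum(ir[e_end:] ** 2)                 # late reverberant energy
    return direct, early, late

# Example: a synthetic impulse response with a direct spike plus decaying noise.
fs = 48000
t = np.arange(int(0.5 * fs)) / fs
rng = np.random.default_rng(0)
ir = np.exp(-t / 0.3) * rng.standard_normal(t.size) * 0.05
ir[0] = 1.0                                        # direct sound at t = 0
d, e, l = direct_early_late_energy(ir, fs)
print(f"direct-to-late energy ratio: {10 * np.log10(d / l):.1f} dB")
```

A ratio of this kind captures how much the late reverberant field can mask the direct sound; the model proposed in the paper goes further by estimating, from live sound rather than impulse responses, how easily information from a source reaches the listener.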
