Abstract

The testing of sound designs often involves only expert evaluations; when non-experts participate, training is commonly required, which can alter the listening experience. This paper presents a method of evaluating sound designs for radio and audio logos that avoids listener training. Sound designs incorporating sound effects, music, or dialogue can be broken down into discrete sound events, which can then be rated using attributes of sound that are meaningful to both designers and listeners. Two examples are discussed, a radio drama and a set of audio logos, both of which were tested using a repertory grid approach. The paper shows that the method can highlight similarities and differences between designer and participant listening experiences. Comparing listening experiences could allow designers to be more confident about how their sound designs will be received.
