Abstract
Perceptual saliency is a precursor to bottom-up attention modeling. While visual saliency models are approaching maturity, auditory models remain in their infancy. This is mainly due to the lack of robust methods for gathering basic data and to oversimplifications such as the assumption of monaural signals. Here we present the rationale and initial results of a newly designed experimental paradigm that tests for auditory saliency of natural sounds in a binaural listening scenario. Our main goal is to explore the idea that the saliency of a sound depends on its relation to background sounds, by presenting more than one sound at a time against different backgrounds. An analysis of the relevant, emerging acoustical correlates together with other descriptors is performed. A review of current auditory saliency models and of the deficiencies of conventional testing approaches is provided. These motivate the development of our experimental test bed and more formalized stimulus selection criteria to support more versatile and ecologically relevant saliency models. Applications for auditory scene analysis and sound synthesis are briefly discussed. Some initial conclusions are drawn about the definition of an expanded feature set to be used for auditory saliency modeling and prediction in the context of natural, everyday sounds.