Abstract

Perceptual saliency is a precursor to bottom-up attention modeling. While visual saliency models are approaching maturity, auditory models remain in their infancy. This is mainly due to the lack of robust methods for gathering basic data, and to oversimplifications such as the assumption of monaural signals. Here we present the rationale and initial results of a newly designed experimental paradigm that tests the auditory saliency of natural sounds in a binaural listening scenario. Our main goal is to explore the idea that the saliency of a sound depends on its relation to background sounds, by presenting more than one sound at a time against different backgrounds. An analysis of the relevant emerging acoustical correlates, together with other descriptors, is performed. A review of current auditory saliency models and the deficiencies of conventional testing approaches is provided. These motivate the development of our experimental test bed and more formalized stimulus selection criteria to support more versatile and ecologically relevant saliency models. Applications to auditory scene analysis and sound synthesis are briefly discussed. Some initial conclusions are drawn about the definition of an expanded feature set to be used for auditory saliency modeling and prediction in the context of natural, everyday sounds.
