Abstract

Salience is the quality of a sensory signal that attracts involuntary attention in humans. While it primarily reflects conspicuous physical attributes of a scene, our understanding of the processes underlying what makes a certain object or event salient remains limited. In the vision literature, experimental results, theoretical accounts, and large amounts of eye-tracking data using rich stimuli have shed light on some of the underpinnings of visual salience in the brain. In contrast, studies of auditory salience have lagged behind due to limitations in both the experimental designs and the stimulus datasets used to probe the question of salience in complex everyday soundscapes. In this work, we deploy an online platform to study salience using a dichotic listening paradigm with natural auditory stimuli. The study validates crowdsourcing as a reliable platform for collecting behavioral responses to auditory salience by comparing experimental outcomes to findings acquired in a controlled laboratory setting. A model-based analysis demonstrates the benefits of extending behavioral measures of salience to a broader selection of auditory scenes and larger pools of subjects. Overall, this effort extends our current knowledge of auditory salience in everyday soundscapes and highlights the limitations of low-level acoustic attributes in capturing the richness of natural soundscapes.

Highlights

  • The literature on sensory salience varies greatly in terms of the experimental paradigms best suited to shed light on the physical, neural, and perceptual underpinnings of salience encoding in the brain

  • Given the perceptual differences between scenes based on their acoustic transience, we further explored the correlation between booth-collected and crowd-sourced average behavioral salience separately for dense and sparse DNSS scenes

  • We presented a detailed analysis of salience data for natural scenes, collected on a crowdsourcing platform, using a dichotic listening paradigm


Summary

INTRODUCTION

Understanding salience can inform systems that more efficiently process information in real-life scenarios. The literature on sensory salience varies greatly in terms of the experimental paradigms best suited to shed light on the physical, neural, and perceptual underpinnings of salience encoding in the brain. The present study uses a dichotic listening paradigm on a crowdsourcing platform and performs a cross-platform comparison of responses collected in the online and controlled laboratory settings, as well as of the salience models derived from the two platforms. It extends the selection of stimuli used previously (JHUDNSS) to encompass a wider collection of event types and environments, focusing on acoustically dense scenes that pose greater challenges for the interpretability and predictability of salience models. It also evaluates salience responses derived from a larger and more diverse pool of subjects using data-driven salience models. Overall, the study aims to expand the frontiers of auditory salience research using larger datasets of complex sounds.
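The cross-platform comparison described above can be illustrated with a minimal sketch: assuming each scene yields a booth-collected and a crowd-sourced average behavioral salience time course (the dictionary layout, variable names, and per-scene Pearson correlation below are assumptions for illustration, not the authors' actual analysis pipeline), agreement between the two platforms can be summarized per scene.

```python
import numpy as np
from scipy.stats import pearsonr

def platform_agreement(booth_salience, crowd_salience):
    """Correlate booth vs. crowd average salience time courses per scene.

    booth_salience, crowd_salience: dicts mapping scene id -> 1-D array of
    the average behavioral salience response in each time bin
    (hypothetical data layout, for illustration only).
    """
    agreement = {}
    for scene, booth in booth_salience.items():
        crowd = crowd_salience[scene]
        n = min(len(booth), len(crowd))        # align time courses conservatively
        r, p = pearsonr(booth[:n], crowd[:n])  # linear agreement between platforms
        agreement[scene] = (r, p)
    return agreement

# Toy usage with synthetic data (not real experimental values)
rng = np.random.default_rng(0)
booth = {"scene_01": rng.random(120)}
crowd = {"scene_01": booth["scene_01"] + 0.1 * rng.standard_normal(120)}
print(platform_agreement(booth, crowd))
```

Splitting the scene dictionaries by density before calling such a function would mirror the dense-versus-sparse comparison mentioned in the highlights.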

Behavioral procedure
Behavioral data analysis
Acoustic analysis
Behavioral results
Acoustic features
Event prediction
DISCUSSION