Abstract

Our acoustic environment contains a plethora of complex sounds that are often in motion. To gauge approaching danger and communicate effectively, listeners need to localize and identify sounds, which includes determining sound motion. This study addresses which acoustic cues impact listeners’ ability to determine sound motion. Signal envelope (ENV) cues are implicated in both sound motion tracking and stimulus intelligibility, suggesting that these processes could be competing for sound processing resources. We created auditory chimaeras from speech and noise stimuli and varied the number of frequency bands, effectively manipulating speech intelligibility. Normal-hearing adults were presented with stationary or moving chimaeras and reported perceived sound motion and content. Results show that sensitivity to sound motion is not affected by speech intelligibility, but differs clearly between the original noise and speech stimuli. Further, acoustic chimaeras with speech-like ENVs and intelligible content induced a strong bias in listeners to report sounds as stationary. Increasing stimulus intelligibility systematically increased that bias, and removing intelligible content reduced it, suggesting that sound content may be prioritized over sound motion. These findings suggest that sound motion processing in the auditory system can be biased by acoustic parameters related to speech intelligibility.

Highlights

  • Our acoustic environment contains a plethora of complex sounds that are often in motion

  • Two stimulus types were added as control: (1) to test whether sound motion perception is affected differently for chimaera and non-chimaera stimuli, we added the original stimuli for each ENV type; (2) to test whether sound motion perception is affected differently when the ENV type remains, but the stimulus content becomes unintelligible, we added a reversed 16-band chimaera for each ENV type

  • A few studies have shown that listeners are able to localize stationary speech sounds in the horizontal plane as accurately as non-speech broadband stimuli [26,27,36], and recent work showed that there is no difference in localization accuracy for a stationary acoustic chimaera with speech-like or noise-like ENVs [24]

Introduction

Our acoustic environment contains a plethora of complex sounds that are often in motion. For TFS speech cues, investigated by amplitude-modulating a speech TFS carrier using the ENV of a noise token, an increase in the number of frequency bands decreases speech intelligibility [8]. While this describes an important dichotomy between ENV and TFS cues [6,8,13,14], a body of work has shown that their independent impacts are challenging to evaluate, because the signal ENV can be reconstructed at the output of the auditory filters, even when it has been physically removed in the processing of a TFS speech stimulus [15,16,17,18]. A stationary bias could be governed by stimulus ENV, independently of the intelligibility of stimulus content, or a combination of these two components.

