Abstract

Auditory perception is facilitated by prior knowledge about the statistics of the acoustic environment. Predictions about upcoming auditory stimuli are processed at various stages along the human auditory pathway, including the cortex and midbrain. Whether such auditory predictions are also processed at a hierarchically lower stage, the peripheral auditory system, is unclear. To address this question, we assessed outer hair cell (OHC) activity in response to isochronous tone sequences and varied the predictability and behavioral relevance of the individual tones (by manipulating tone-to-tone probabilities and the human participants’ task, respectively). We found that predictability alters the amplitude of distortion-product otoacoustic emissions (DPOAEs, a measure of OHC activity) in a manner that depends on the behavioral relevance of the tones. Simultaneously recorded cortical responses showed a significant effect of both the predictability and the behavioral relevance of the tones, indicating that the experimental manipulations were effective at central auditory processing stages. Our results provide evidence for a top-down effect on the processing of auditory predictability in the human peripheral auditory system, in line with previous studies showing peripheral effects of auditory attention.
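
To make the experimental manipulation concrete, the sketch below generates an isochronous two-tone sequence in which the tone-to-tone transition probability controls how predictable each tone is, roughly in the spirit of the design described above. The frequencies, probabilities, and sequence length are illustrative assumptions, not the study's actual stimulus parameters.

    import numpy as np

    # Illustrative sketch (not the study's actual parameters): an isochronous
    # sequence of two tones whose first-order transition probabilities set how
    # predictable the next tone is.
    rng = np.random.default_rng(0)
    TONES_HZ = [1000.0, 1200.0]   # two candidate tone frequencies (assumed)
    P_REPEAT_HIGH = 0.9           # high predictability: the next tone is usually a repeat
    P_REPEAT_LOW = 0.5            # low predictability: the next tone is uncertain

    def make_sequence(n_tones, p_repeat):
        """Draw an isochronous tone sequence from a two-state Markov chain."""
        seq = [float(rng.choice(TONES_HZ))]
        for _ in range(n_tones - 1):
            if rng.random() < p_repeat:
                seq.append(seq[-1])                   # repeat the previous tone
            else:
                other = [f for f in TONES_HZ if f != seq[-1]]
                seq.append(float(rng.choice(other)))  # switch to the other tone
        return seq

    high_pred = make_sequence(100, P_REPEAT_HIGH)
    low_pred = make_sequence(100, P_REPEAT_LOW)
    print("repeat rate, high predictability:", np.mean(np.diff(high_pred) == 0))
    print("repeat rate, low predictability:", np.mean(np.diff(low_pred) == 0))

A first-order Markov chain is just one simple way to realize "tone-to-tone probabilities"; the paper's exact sequence statistics are not given in this excerpt.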

Highlights

  • Many socially relevant sounds in our natural environment arise from acoustic signals that have characteristic, regular spectral-temporal structures

  • The distortion-product OAE (DPOAE) level and noise floor were on average −15.4 ± 6.0 dB SPL and −28.7 ± 5.4 dB SPL, respectively, corresponding to an average DPOAE signal-to-noise ratio (SNR) of 13.4 ± 4.8 dB (see the sketch after these highlights), lower than the individual otoacoustic emission (OAE) SNR values (>20 dB) reported in previous attention studies using acoustic medial olivocochlear (MOC) reflex elicitors (e.g., Beim et al., 2018)

  • When the tones were task-relevant, DPOAEs elicited by highly predictable tones were on average 0.54 ± 0.24 dB stronger than those elicited by less predictable tones

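As a quick check of the numbers in the SNR highlight above, the sketch below computes the DPOAE signal-to-noise ratio as the difference between the DPOAE level and the noise floor (both in dB SPL), which is the standard definition. The note on the 2f1 − f2 distortion frequency is general DPOAE background rather than a detail taken from this excerpt.

    # DPOAEs are conventionally measured at the 2*f1 - f2 distortion frequency
    # generated by two primary tones f1 < f2; the primaries used in this study
    # are not specified in this excerpt.
    dpoae_level_db_spl = -15.4   # mean DPOAE level quoted in the highlight
    noise_floor_db_spl = -28.7   # mean noise floor quoted in the highlight

    snr_db = dpoae_level_db_spl - noise_floor_db_spl
    print(f"DPOAE SNR ~ {snr_db:.1f} dB")   # ~13.3 dB from the rounded group means
    # The reported 13.4 +/- 4.8 dB is an average of per-participant SNRs; the small
    # 0.1 dB discrepancy simply reflects rounding of the group means.
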

Introduction

Many socially relevant sounds in our natural environment arise from acoustic signals that have characteristic, regular spectral-temporal structures. The melody and rhythm of music, for instance, arise from specific spectral and temporal relations among the individual notes. Such a regular structure renders the constituent acoustic elements more predictable (in both time and spectral content), and human listeners can exploit this predictability to process and perceive the acoustic input more effectively. The common view (e.g., Clark, 2013) is that the brain aims to match ‘bottom-up’ acoustic input with ‘top-down’ auditory predictions at multiple levels of the auditory processing hierarchy by generating and dynamically updating the neural activity patterns that upcoming sounds are expected to evoke.
