Abstract

Amplitude modulation (AM) and frequency modulation (FM) provide crucial auditory information. If FM is encoded as AM, it should be possible to give a unified account of AM and FM perception in terms of both response consistency and performance. These two aspects of behavior were estimated for normal-hearing participants using a constant-stimuli, forced-choice detection task repeated twice with the same stimuli (double pass). Sinusoidal AM or FM with rates of 2 or 20 Hz were applied to a 500-Hz pure-tone carrier and presented at detection threshold. All stimuli were masked by a modulation noise. Percent agreement of responses across passes and percent-correct detection for the two passes were used to estimate consistency and performance, respectively. These data were simulated using a model implementing peripheral processes, a central modulation filterbank, an additive internal noise, and a template-matching device. Different levels of internal noise were required to reproduce the AM and FM data, but a single level could account for the 2- and 20-Hz AM data. For FM, two levels of internal noise were needed to account for detection at the slow and fast rates. Finally, the level of internal noise yielding the best predictions increased with the level of the modulation-noise masker. Overall, these results suggest that different sources of internal variability are involved in AM and FM detection at low audio frequencies.
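As a purely illustrative sketch (not the authors' analysis code), the two double-pass measures can be computed directly from the interval choices recorded in the two passes; the function name, array layout, and two-interval forced-choice assumption below are hypothetical.

    import numpy as np

    def double_pass_metrics(responses_pass1, responses_pass2, target_interval):
        """Consistency and performance from a double-pass, two-interval forced-choice task.

        responses_pass1, responses_pass2: interval chosen on each trial (0 or 1);
        the same stimuli are presented, in the same order, in both passes.
        target_interval: interval that actually contained the AM or FM target.
        """
        r1 = np.asarray(responses_pass1)
        r2 = np.asarray(responses_pass2)
        t = np.asarray(target_interval)

        # Consistency: proportion of trials answered identically in the two passes.
        percent_agreement = 100.0 * np.mean(r1 == r2)

        # Performance: proportion of correct responses pooled over the two passes.
        percent_correct = 100.0 * np.mean(np.concatenate([r1 == t, r2 == t]))

        return percent_agreement, percent_correct

    # Hypothetical usage with three trials.
    pa, pc = double_pass_metrics([0, 1, 1], [0, 0, 1], [0, 1, 1])
    print(f"agreement: {pa:.1f}%, correct: {pc:.1f}%")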

Highlights

  • This general model postulates that temporal-modulation cues in sounds are transformed into so-called “neural temporal-envelope” cues and that fine-timing “temporal fine structure” (TFS) cues are discarded after demodulation by central processes.

  • Response consistency and performance in a modulation-detection task were estimated for young, normal-hearing participants using a double-pass paradigm and sinusoidal amplitude-modulation (AM) and frequency-modulation (FM) targets masked by a modulation-noise masker.

Introduction

This general model (hereafter referred to as the “modulation-filterbank model”; for recent implementations, see Jepsen et al., 2008; Jørgensen et al., 2013; Biberger and Ewert, 2016, 2017; Wallaert et al., 2017, 2018; King et al., 2019; Cabrera et al., 2019) postulates that temporal-modulation cues in sounds are transformed into so-called “neural temporal-envelope” cues (i.e., fluctuations in mean firing rate in auditory neurons) and that fine-timing “temporal fine structure” (TFS) cues (i.e., carrier information) are discarded after demodulation by central (post-cochlear) processes. This type of model incorporates an important source of “inefficiency” in temporal-modulation processing: a Gaussian internal noise added to the representation of temporal-envelope cues at the output of the modulation filters.
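As a minimal, hypothetical sketch of this internal-noise stage (the decision rule, noise level, and variable names below are illustrative assumptions, not the published implementation), Gaussian internal noise can be added to the modulation-filter output of each observation interval before a template-matching decision:

    import numpy as np

    rng = np.random.default_rng(0)

    def template_decision(target_output, masker_output, template, internal_noise_sd):
        """Choose the interval whose noisy modulation-filter output best matches the template.

        target_output, masker_output: envelope-domain outputs of a modulation filter
        for the target-plus-masker and masker-alone intervals.
        template: noise-free expected output for the target (unit norm).
        internal_noise_sd: standard deviation of the additive Gaussian internal noise.
        """
        # Additive Gaussian internal noise limits the fidelity of the internal representation.
        noisy_target = target_output + rng.normal(0.0, internal_noise_sd, target_output.shape)
        noisy_masker = masker_output + rng.normal(0.0, internal_noise_sd, masker_output.shape)

        # Template matching: correlate each noisy representation with the template
        # and pick the interval giving the larger correlation.
        return int(np.dot(noisy_target, template) > np.dot(noisy_masker, template))

    # Toy example: a 2-Hz sinusoidal envelope at the output of one modulation filter.
    t = np.linspace(0.0, 1.0, 1000)
    template = np.sin(2 * np.pi * 2 * t)
    template /= np.linalg.norm(template)
    trials = [template_decision(0.1 * np.sin(2 * np.pi * 2 * t), np.zeros_like(t), template, 0.5)
              for _ in range(1000)]
    print("proportion correct:", np.mean(trials))  # falls toward 0.5 as internal_noise_sd grows

In a sketch of this kind, raising internal_noise_sd lowers both percent correct and the agreement between two simulated passes run on the same external modulation noise, which is the logic used to infer the internal-noise level from double-pass data.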
