Abstract

Perceptual integration of the sound emanating directly from a source with its reflections requires both temporal storage of acoustic details and computation of their correlation. We examined whether this temporal storage is frequency dependent and associated with speech unmasking. In Experiment 1, a break in correlation (BIC) between interaurally correlated wideband or narrowband noises was detectable even when an interaural interval (IAI) was introduced. The longest detectable IAI, which varied markedly across participants, could reach about 20 ms for wideband noise and decreased as the center frequency of narrowband noises increased. In Experiment 2, when the interval between target speech and its single-reflection simulation (intertarget interval [ITI]) was reduced from 64 to 0 ms, intelligibility of the target speech improved markedly under speech-masking but not noise-masking conditions. The longest effective ITI correlated with the longest IAI for detecting the BIC only for low-frequency (≤400 Hz) narrowband noise. Thus the ability to temporally store fine acoustic details contributes to perceptual integration of correlated leading and lagging sounds, which, in turn, contributes to releasing speech from informational masking in noisy, reverberant environments.
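The BIC stimulus described above can be illustrated with a short simulation: two noise channels that are identical except for an interaural delay, with a brief uncorrelated segment (the "break") inserted into one channel. This is a minimal sketch, not the authors' stimulus code; the sample rate, IAI, and break timing below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
fs = 44100                       # sample rate in Hz (assumed, not from the paper)
dur = 0.5                        # stimulus duration in seconds
iai_ms = 2.0                     # interaural interval: right ear lags the left
bic_start, bic_len = 0.2, 0.1    # break in correlation: 100-ms uncorrelated segment

n = int(fs * dur)
left = rng.standard_normal(n)

# Right channel is a delayed copy of the left (interaurally correlated noise).
lag = int(fs * iai_ms / 1000)
right = np.roll(left, lag)

# Insert the BIC: replace a segment of the right channel with fresh,
# independent noise so the two ears are momentarily uncorrelated.
i0 = int(fs * bic_start)
i1 = int(fs * (bic_start + bic_len))
right[i0:i1] = rng.standard_normal(i1 - i0)

def interaural_corr(l, r, lag, start, stop):
    """Normalized correlation over samples [start, stop), after undoing the IAI delay."""
    return float(np.corrcoef(l[start:stop], r[start + lag:stop + lag])[0, 1])

# Correlation is ~1 before the break and ~0 during it.
pre = interaural_corr(left, right, lag, 0, i0 - lag)
during = interaural_corr(left, right, lag, i0, i1 - lag)
print(f"before BIC: {pre:.3f}, during BIC: {during:.3f}")
```

Detecting the break requires the auditory system to hold the leading channel's fine structure in a temporal store long enough to correlate it with the delayed channel; the paper's finding is that this store spans up to roughly 20 ms for wideband noise.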
