Abstract

The association between temporal-masking patterns, duration, and loudness for broadband noises with ramped and damped envelopes was examined. Duration and loudness matches between the ramped and damped sounds differed significantly. Listeners perceived the ramped stimuli to be longer and louder than the damped stimuli, but the outcome was biased by the stimulus context. Next, temporal-masking patterns were measured for ramped and damped broadband noises using three 10-ms probe tones (0.5, 1.5, and 4.0 kHz) presented individually at various temporal delays. Predictions of subjective duration derived from the masking results underpredicted the matching results. Loudness estimates derived from models that assume persistence of neural activity after stimulus offset [Glasberg, B. R., and Moore, B. C. J. (2002). "A model of loudness applicable to time-varying sounds," J. Audio Eng. Soc. 50, 331-341; Chalupper, J., and Fastl, H. (2002). "Dynamic loudness model (DLM) for normal and hearing-impaired listeners," Acta Acust. Acust. 88, 378-386] were greater for ramped sounds than for damped sounds and were close to the average results obtained via the matching task. Differences in simultaneous-masked thresholds for these stimuli could not account for the loudness-matching results. Decay suppression of the later-occurring portion of the damped stimulus may account for the differences in perception due to the stimulus context; however, a parsimonious implementation of this process that accounts for both subjective duration and loudness judgments remains unclear.
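
As a rough illustration of why persistence-based models predict greater loudness for ramped than for damped sounds, the Python sketch below smooths an envelope-based loudness proxy with a fast attack and a slow release and takes the peak of the smoothed trace as the loudness estimate. The stimulus duration, envelope time constant, frame rate, and smoothing coefficients are illustrative assumptions, not the parameters of the Glasberg-Moore model or the DLM.

import numpy as np

# Minimal sketch (assumed parameters, not the published models): an
# envelope-based "instantaneous loudness" proxy is smoothed with a fast
# attack and a slow release, so the internal trace persists after offset.
# The peak of the smoothed trace is taken as the loudness estimate.

fs = 1000                              # 1-ms analysis frames (assumed)
dur = 0.5                              # 500-ms stimuli (assumed)
t = np.arange(0.0, dur, 1.0 / fs)

ramped = np.exp((t - dur) / 0.1)       # rising exponential envelope (assumed 100-ms time constant)
damped = ramped[::-1]                  # the same envelope reversed in time

def peak_smoothed_loudness(env, attack=0.05, release=0.005):
    """One-pole smoothing with asymmetric per-frame attack/release
    coefficients (values chosen only to illustrate the persistence idea)."""
    y = np.zeros_like(env)
    for i in range(1, len(env)):
        coef = attack if env[i] > y[i - 1] else release
        y[i] = y[i - 1] + coef * (env[i] - y[i - 1])
    return y.max()

print("ramped estimate:", peak_smoothed_loudness(ramped))
print("damped estimate:", peak_smoothed_loudness(damped))
# The ramped envelope yields the larger estimate: its level is still rising
# at offset, so the slowly built-up internal trace reaches a higher peak than
# for the time-reversed (damped) envelope, whose maximum occurs before the
# trace has had time to build up.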
