Abstract
Many audio applications perform perception-based time-frequency (TF) analysis by decomposing sounds into a set of functions with good TF localization (i.e. with a small essential support in the TF domain) using TF transforms and applying psychoacoustic models of auditory masking to the transform coefficients. To accurately predict masking interactions between coefficients, the TF properties of the model should match those of the transform. This requires masking data for stimuli with good TF localization. However, little is known about TF masking for mathematically well-localized signals. Most existing masking studies used stimuli that are broad in time and/or frequency, and few studies involved TF conditions. Consequently, the present study had two goals. The first was to collect TF masking data for well-localized stimuli in humans. Masker and target were 10-ms Gaussian-shaped sinusoids with a bandwidth of approximately one critical band. The overall pattern of results is qualitatively similar to existing data for long maskers. To facilitate implementation in audio processing algorithms, the measured TF masking function is provided as a dataset. The second goal was to assess the potential effect of auditory efferents on TF masking using a modeling approach. The temporal window model of masking was used to predict present and existing data in two configurations: (1) with standard model parameters (i.e. without efferents), and (2) with cochlear gain reduction to simulate the activation of efferents. The ability of the model to predict the present data was quite good with the standard configuration but highly degraded with gain reduction. Conversely, the ability of the model to predict existing data for long maskers was better with gain reduction than without. Overall, the model predictions suggest that TF masking can be affected by efferent (or other) effects that reduce cochlear gain.
Such effects were avoided in the experiment of this study by using maximally-compact stimuli.
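The stimuli described above (10-ms Gaussian-shaped sinusoids, i.e. Gabor atoms, which are maximally compact in the TF plane) can be sketched as follows. This is an illustrative reconstruction, not the authors' code: the sampling rate, carrier frequency, and the envelope-width convention (sigma set to one sixth of the nominal duration) are assumptions for the example.

```python
import numpy as np

def gaussian_tone(f0, duration=0.010, fs=48000, amp=1.0):
    """Generate a Gaussian-windowed sinusoid (Gabor atom).

    The 10-ms nominal duration follows the stimulus description in the
    text; fs and the envelope scaling are illustrative assumptions.
    """
    n = int(round(duration * fs))
    t = np.arange(n) / fs
    t0 = duration / 2.0                 # center the envelope in the window
    sigma = duration / 6.0              # assumed width: +/-3 sigma fits the window
    envelope = np.exp(-0.5 * ((t - t0) / sigma) ** 2)
    return amp * envelope * np.sin(2 * np.pi * f0 * t)

# Hypothetical 4-kHz masker/target atom
stimulus = gaussian_tone(f0=4000.0)
print(stimulus.shape)  # (480,) samples at 48 kHz
```

Because a Gaussian envelope minimizes the time-bandwidth product, such an atom concentrates its energy in the smallest possible TF region, which is what "maximally-compact" refers to above.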
Highlights
It is of great interest in audio applications to take human auditory perception into account in the signal processing chain
To obtain a perceptually motivated TF analysis, one can choose a set of atoms whose duration and bandwidth approximate the time and frequency resolution of the human auditory system and/or apply a psychoacoustic model of auditory masking to the coefficients of the transform
Sparsity-based approaches combine TF decompositions and masking models to reduce the number of nonzero TF coefficients [8, 9]
Summary
It is of great interest in audio applications to take human auditory perception into account in the signal processing chain. This generally consists in performing a perceptually motivated time-frequency (TF) analysis of the signal. To obtain a perceptually motivated TF analysis, one can choose a set of atoms whose duration and bandwidth approximate the time and frequency resolution of the human auditory system. To reduce the digital size of audio files, audio codecs like mp3 decompose sounds into TF segments (ideally, a transform approximating the auditory frequency resolution is used, as in [5]) and apply a masking model to reduce the bit rates in these segments. Source separation algorithms estimate binary masks to weight the TF coefficients of sound mixtures based on auditory masking in order to separate the signal(s) of interest [12, 13]
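The binary-masking step described above can be sketched as follows. This is a minimal illustration, not the cited algorithms: the STFT helper, the toy mixture, and the fixed energy threshold are assumptions for the example; in a masking-model-driven system the threshold would instead come from a psychoacoustic masking curve.

```python
import numpy as np

def stft(x, win_len=512, hop=256):
    """Minimal STFT with a Hann window (illustrative, not a library API)."""
    win = np.hanning(win_len)
    n_frames = 1 + (len(x) - win_len) // hop
    frames = np.stack([x[i * hop : i * hop + win_len] * win
                       for i in range(n_frames)])
    return np.fft.rfft(frames, axis=1)      # shape: (frames, bins)

# Toy mixture: a 440-Hz tone ("target") buried in white noise
rng = np.random.default_rng(0)
fs = 16000
t = np.arange(fs) / fs
mixture = np.sin(2 * np.pi * 440 * t) + 0.1 * rng.standard_normal(fs)

X = stft(mixture)
# Binary mask: keep only TF cells whose magnitude exceeds a threshold
# (placeholder for a masking-model-derived audibility threshold).
mask = (np.abs(X) > 0.1 * np.abs(X).max()).astype(float)
X_masked = X * mask                         # weighted TF coefficients
print(mask.mean())                          # fraction of TF cells retained
```

Zeroing sub-threshold cells is also the basic mechanism of the sparsity-based approaches mentioned in the highlights: coefficients predicted to be masked (inaudible) are discarded, leaving a sparse TF representation.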