Abstract

Human listeners must identify and orient themselves to auditory objects in their environment. What acoustic features support a listener's ability to differentiate the variety of sound sources they might encounter? Typical studies of auditory object perception obtain dissimilarity ratings between pairs of objects, often within a single category of sound. However, such an approach precludes an understanding of general acoustic features that might be used to differentiate sounds across categories. The present experiment takes a broader approach to the analysis of dissimilarity ratings by leveraging the acoustic variability within and between different sound categories as characterized by a large, diverse set of 36 sound tokens (12 speech utterances from different speakers, 12 instrument timbres, and 12 everyday objects from a typical human environment). We analyze multidimensional scaling results as well as models of trial-level dissimilarity ratings as a function of different acoustic representations, including spectral, temporal, and noise features as well as modulation power spectra and cochlear spectrograms. In addition to previously noted differences in spectral and temporal envelopes, results indicate that listeners' dissimilarity ratings are also related to spectral variability and noise, particularly in differentiating sounds between categories. Dissimilarity ratings also appear to closely parallel sound identification performance.
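
As a rough illustration of the multidimensional scaling step mentioned above, the sketch below projects a 36 x 36 matrix of pairwise dissimilarity ratings into a low-dimensional space using non-metric MDS from scikit-learn. The random placeholder matrix, the two-dimensional solution, and the scikit-learn call are assumptions for illustration only, not the authors' analysis code.

    # Minimal sketch (not the authors' code): non-metric MDS on a
    # listener-averaged matrix of pairwise dissimilarity ratings.
    import numpy as np
    from sklearn.manifold import MDS

    n_tokens = 36  # 12 speech utterances + 12 instrument timbres + 12 everyday objects

    # Placeholder data: a symmetric dissimilarity matrix with a zero diagonal.
    rng = np.random.default_rng(0)
    ratings = rng.random((n_tokens, n_tokens))
    dissim = (ratings + ratings.T) / 2
    np.fill_diagonal(dissim, 0.0)

    # Non-metric MDS on precomputed dissimilarities, as is typical for rating data.
    mds = MDS(n_components=2, dissimilarity="precomputed", metric=False, random_state=0)
    coords = mds.fit_transform(dissim)  # shape (36, 2): one point per sound token
    print(coords.shape)

The resulting coordinates can then be inspected for clustering by sound category or regressed against acoustic features, in the spirit of the analyses described in the abstract.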
