Abstract
Natural sounds contain information on multiple timescales, so the auditory system must analyze and integrate acoustic information on those different scales to extract behaviorally relevant information. However, this multi-scale process in the auditory system has not been widely investigated, and existing models of temporal integration are mainly built on detection or recognition tasks at a single timescale. Here we use a paradigm requiring processing on relatively ‘local’ and ‘global’ scales and provide evidence suggesting that the auditory system extracts fine-detail acoustic information using short temporal windows and abstracts global acoustic patterns using long temporal windows. Performance on behavioral tasks that require processing fine-detail information does not improve with longer stimulus length, contrary to predictions of previous temporal integration models such as the multiple-looks model and the spectro-temporal excitation pattern model. Moreover, the perceptual construction of putatively ‘unitary’ auditory events requires several hundred milliseconds. These findings support the hypothesis of dual-scale processing, likely implemented in the auditory cortex.
Highlights
To explore mechanisms of temporal integration and test predictions of different integration models, we ran a series of four psychophysical experiments that measured perceptual performance at shorter and longer scales, manipulating the timescale of acoustic information while biasing auditory processing toward local or global task demands.
When the amount of information extends beyond the capacity of processing within a certain time window, the auditory system ‘summarizes’ the local details and forms a representation of a global pattern.
Methods
In Experiment 1A, two factors, stimulus length and segment duration, were manipulated. The factor segment number (equivalent to stimulus length, because segment duration was fixed at 30 ms) was manipulated across two conditions. In condition 1, each pair of stimuli included one stimulus whose mean frequency shifted from 1300 Hz to 1700 Hz, regardless of the number of segments, and another whose mean frequency was constant at 1500 Hz (Fig. 2a). Participants might attend only to the first or last segment rather than resolving the acoustic details of the whole stimulus. To avoid this potential confound, the sweep directions of the segments at the beginning and end of the stimuli were matched between the two stimuli in both ‘same’ and ‘different’ pairs; the segments in the middle of the stimuli were identical in ‘same’ pairs and differed in ‘different’ pairs. ISIs varied across four levels: 500, 700, 1000 and 1500 ms.
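The stimulus construction described above can be sketched in code. The following is an illustrative NumPy sketch, not the authors' actual synthesis code: the sample rate, within-segment sweep extent, and random per-segment sweep directions are assumptions (the text specifies only the 30 ms segment duration, the 1300–1700 Hz mean-frequency shift, and the 1500 Hz constant comparison), and the pairing constraints on matched beginning/end sweep directions are omitted.

```python
import numpy as np

FS = 44100          # sample rate in Hz; assumed, not stated in the text
SEG_DUR = 0.030     # 30 ms segment duration, as in Experiment 1A
SWEEP_EXTENT = 200  # Hz excursion of each within-segment sweep; assumed

def make_segment(mean_freq, direction, fs=FS, dur=SEG_DUR, extent=SWEEP_EXTENT):
    """One 30 ms tone segment sweeping linearly through `mean_freq`
    (direction = +1 for upward, -1 for downward)."""
    t = np.arange(int(fs * dur)) / fs
    f0 = mean_freq - direction * extent / 2  # start frequency
    f1 = mean_freq + direction * extent / 2  # end frequency
    # Linear chirp: phase(t) = 2*pi * (f0*t + (f1 - f0) * t^2 / (2*dur))
    phase = 2 * np.pi * (f0 * t + (f1 - f0) / (2 * dur) * t ** 2)
    return np.sin(phase)

def make_stimulus(n_segments, shifting, rng):
    """Concatenate segments. If `shifting`, the mean frequency steps
    from 1300 to 1700 Hz across segments; otherwise it stays at 1500 Hz."""
    if shifting:
        means = np.linspace(1300, 1700, n_segments)
    else:
        means = np.full(n_segments, 1500.0)
    directions = rng.choice([-1, 1], size=n_segments)  # random sweep directions
    segments = [make_segment(m, d) for m, d in zip(means, directions)]
    return np.concatenate(segments), directions

rng = np.random.default_rng(0)
stim, dirs = make_stimulus(10, shifting=True, rng=rng)
print(len(stim) / FS)  # 10 segments x 30 ms = 0.3 s
```

A longer stimulus is produced simply by increasing `n_segments`, which is why segment number and stimulus length are equivalent factors here.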