Abstract
In humans, understanding a voice amidst competing sounds depends on parsing the sound mixture into “streams” representing each source's content. Streaming can be influenced by top-down attentional focus, while acoustic features can affect streaming percepts through bottom-up, automatic processing of pitch, timbre, and location. Dolphins regularly navigate a cacophony of echoes generated during echolocation, conspecific echolocation clicks, and other passively received sounds (e.g., communication signals, environmental noise), and thus may rely on streaming. Using auditory evoked potentials (AEPs), we asked whether dolphins exhibit evidence of bottom-up, frequency-based stream segregation. Initial results using a classic A-B-A sequence of repeated tone triplets (either low-high-low or high-low-high tones) suggest that the triplets break apart perceptually into low and high streams; specifically, the AEP magnitude evoked by the middle tone increases with increasing frequency separation. Differences in dolphin hearing sensitivity across the frequencies tested appear to account for part of this frequency-manipulation effect. Additionally, earlier studies suggest that the dolphin auditory temporal integration window shortens as frequency increases, which may interact with streaming processes. This work lays a foundation for future tests of top-down effects, such as attention and expectation, on dolphin auditory stream processing.
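For readers unfamiliar with the A-B-A paradigm, the sketch below illustrates how such a stimulus is typically constructed: an A-B-A tone triplet followed by a silent slot, repeated to form a sequence, with the A-B frequency separation varied across conditions. This is a minimal illustrative sketch only; the sample rate, tone durations, gap durations, and frequencies are assumptions for demonstration, not the parameters used in the study.

```python
import numpy as np

FS = 192_000  # sample rate in Hz (illustrative; dolphin AEP work requires high rates)

def tone(freq_hz, dur_s, fs=FS):
    """Pure tone with a Hann envelope to avoid onset/offset clicks."""
    t = np.arange(int(dur_s * fs)) / fs
    return np.sin(2 * np.pi * freq_hz * t) * np.hanning(t.size)

def aba_triplet(f_a, f_b, tone_dur=0.02, gap_dur=0.01, fs=FS):
    """One A-B-A triplet plus a silent slot (A-B-A-_), the classic streaming
    stimulus: at small |f_b - f_a| listeners report a single galloping rhythm;
    at large separations the tones split into separate low and high streams."""
    gap = np.zeros(int(gap_dur * fs))
    a, b = tone(f_a, tone_dur, fs), tone(f_b, tone_dur, fs)
    silent_slot = np.zeros(a.size)
    return np.concatenate([a, gap, b, gap, a, gap, silent_slot, gap])

def aba_sequence(f_a, f_b, n_triplets=50, **kwargs):
    """Repeat the triplet to form a full test sequence."""
    return np.tile(aba_triplet(f_a, f_b, **kwargs), n_triplets)

# Hypothetical example: low-high-low triplets with the B tone 3, 6, or 12
# semitones above a 40 kHz A tone (values chosen for illustration only).
for semitones in (3, 6, 12):
    f_a = 40_000.0
    f_b = f_a * 2 ** (semitones / 12)
    stimulus = aba_sequence(f_a, f_b)
```

In the actual experiment, the dependent measure would be the AEP evoked by the middle (B) tone, recorded while sequences like these are presented at each frequency separation.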