Abstract

Echolocating bats can identify three-dimensional objects exclusively through the analysis of acoustic echoes of their ultrasonic emissions. However, objects of the same structure can differ in size, and the auditory system must achieve a size-invariant, normalized object representation for reliable object recognition. This study describes both the behavioral classification and the cortical neural representation of echoes of complex virtual objects that vary in object size. In a phantom-target playback experiment, it is shown that the bat Phyllostomus discolor spontaneously classifies most scaled versions of objects according to trained standards. This psychophysical performance is reflected in the electrophysiological responses of a population of cortical units that showed an object-size-invariant response (14/109 units, 13%). These units respond preferentially to echoes from objects in which echo duration (encoding object depth) and echo amplitude (encoding object surface area) co-vary in a meaningful manner. These results indicate that, at the level of the bat's auditory cortex, an object-oriented rather than a stimulus-parameter-oriented representation of echoes is achieved.
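To make the co-variation concrete, the sketch below (not taken from the study; all names and values are hypothetical) works through how the two echo parameters change when a target is scaled uniformly by a factor s: echo duration tracks object depth and grows linearly with s, while echo amplitude tracks reflective surface area, which grows with s², i.e. the echo level rises by roughly 10·log10(s²) = 20·log10(s) dB.

```python
import math

# Illustrative sketch of how echo parameters co-vary under uniform scaling.
# The scaling relations (depth ~ s, surface area ~ s^2) are generic geometry,
# not the stimulus code used in the study; the numbers are hypothetical.

def scaled_echo_parameters(base_duration_ms, base_level_db, scale):
    """Return (echo duration in ms, echo level in dB) for a target scaled by `scale`.

    Duration follows object depth (linear in scale); echo level follows
    reflective surface area (quadratic in scale), i.e. +20*log10(scale) dB.
    """
    duration_ms = base_duration_ms * scale
    level_db = base_level_db + 20.0 * math.log10(scale)
    return duration_ms, level_db

# Example: a hypothetical trained standard (3 ms, -30 dB) scaled to 50%, 100%, 200%.
for s in (0.5, 1.0, 2.0):
    d, level = scaled_echo_parameters(3.0, -30.0, s)
    print(f"scale {s:.1f}: duration {d:.1f} ms, level {level:+.1f} dB")
```

In this framing, a size-invariant unit would respond along this joint duration-level trajectory, but not to combinations off it (e.g., a long echo paired with a low level).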

Highlights

  • For both the visual and the auditory domain, the formation of perceptual objects from physical stimuli is an essential task

  • It is hypothesized that the auditory cortex segregates auditory objects depending on the auditory background, i.e., it adjusts its sensitivity for the boundaries of auditory objects along both the auditory time and frequency axes based on the spectrotemporal fluctuation statistics of the auditory background [4]

  • Human psychophysical studies have shown that information about speaker size is well preserved in human speech, that the human auditory system can segregate size information from information about the content, and that the auditory system can compensate for the effect of speaker size on perceived speech [8]: the same vowel pronounced by an adult and a child differs dramatically in its spectral content, yet it is readily perceived as the same vowel


Introduction

For both the visual and the auditory domain, the formation of perceptual objects from physical stimuli is an essential task. Human psychophysical studies have shown that information about speaker size is well preserved in human speech, that the human auditory system can segregate size information from information about the content, and that the auditory system can compensate for the effect of speaker size on perceived speech [8]: the same vowel pronounced by an adult and a child differs dramatically in its spectral content, yet it is readily perceived as the same vowel. In an fMRI study, von Kriegstein et al. [11] showed that information about the vocal-tract length of a speaker, as an acoustic marker of body size, may be processed as early as the auditory thalamus, and that an interaction between a voice's fundamental frequency (which can mediate size information) and its vocal-tract length may occur in nonprimary auditory cortex.
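As a rough illustration of the vowel example (not from the paper; the formant values are approximate textbook figures for /a/), the sketch below scales formant frequencies inversely with vocal-tract length under a simple uniform-tube assumption: a shorter (child) vocal tract shifts all formants upward, yet the formant ratios that help cue vowel identity stay the same.

```python
# Illustrative sketch: formant frequencies scale roughly inversely with
# vocal-tract length, so the same vowel shifts upward in frequency for a
# shorter (child) vocal tract while the formant pattern (ratios) is preserved.
# The values below are rough textbook-style estimates, used only as an example.

ADULT_FORMANTS_HZ = (730.0, 1090.0, 2440.0)   # approx. F1-F3 of /a/, adult male

def shift_formants(formants_hz, vocal_tract_ratio):
    """Scale formants for a vocal tract `vocal_tract_ratio` times as long.

    Under a simple uniform-tube model, resonance frequencies vary as 1 / length.
    """
    return tuple(f / vocal_tract_ratio for f in formants_hz)

# A child's vocal tract at ~0.7x the adult length raises all formants by ~1.4x,
# yet the F2/F1 ratio (one cue to vowel identity) is unchanged.
child = shift_formants(ADULT_FORMANTS_HZ, 0.7)
print("adult formants:", ADULT_FORMANTS_HZ)
print("child formants:", tuple(round(f) for f in child))
print("F2/F1 adult vs child:",
      round(ADULT_FORMANTS_HZ[1] / ADULT_FORMANTS_HZ[0], 2),
      round(child[1] / child[0], 2))
```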
