Abstract

The world around us appears stable despite our constantly moving head, eyes, and body. How the brain achieves this is poorly understood, and even less so in the auditory domain. Using electroencephalography and the so-called mismatch negativity, we investigated whether auditory space is encoded in an allocentric (referenced to the environment) or a craniocentric (referenced to the head) representation. Fourteen subjects were presented with noise bursts from loudspeakers in an anechoic environment. Occasionally, subjects were cued to rotate their heads, and a deviant sound burst then occurred that differed from the preceding standard stimuli in either an allocentric or a craniocentric frame of reference. We observed a significant mismatch negativity, i.e., a more negative response to deviants relative to standard stimuli from about 136 to 188 ms after stimulus onset, in the craniocentric deviant condition only. Distributed source modeling with sLORETA revealed an involvement of the lateral superior temporal gyrus and inferior parietal lobule in the underlying neural processes. These findings suggest a craniocentric, rather than allocentric, representation of auditory space at the level of the mismatch negativity.
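As an informal illustration of how such a mismatch response is typically quantified, the sketch below (my own, not the authors' pipeline) averages epoched EEG into event-related potentials for standards and deviants, forms the deviant-minus-standard difference wave, and takes its mean amplitude in the 136-188 ms window; the array shapes, sampling rate, placeholder random data, and fronto-central channel index are all assumptions.

```python
import numpy as np

# Hypothetical epoched EEG: (n_trials, n_channels, n_samples), baseline-corrected,
# sampled at 500 Hz, epochs starting at stimulus onset. Random placeholder data only.
fs = 500.0
rng = np.random.default_rng(0)
standards = rng.normal(0.0, 1.0, size=(200, 32, 400))
deviants = rng.normal(0.0, 1.0, size=(40, 32, 400))

# Event-related potentials: average across trials.
erp_standard = standards.mean(axis=0)   # (n_channels, n_samples)
erp_deviant = deviants.mean(axis=0)

# MMN difference wave: deviant minus standard.
diff_wave = erp_deviant - erp_standard

# Mean difference amplitude in the 136-188 ms window at one fronto-central
# channel (index 10 stands in for an electrode such as FCz).
t = np.arange(diff_wave.shape[1]) / fs
window = (t >= 0.136) & (t <= 0.188)
mmn_amplitude = diff_wave[10, window].mean()
print(f"Mean deviant-minus-standard amplitude, 136-188 ms: {mmn_amplitude:.3f} (a.u.)")
```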

Highlights

  • In everyday life, we permanently move our body, head, and eyes while perceiving the environment with our different senses

  • There is a substantial body of evidence, primarily based on single-neuron studies in various animal species, that auditory and visual spatial information is integrated in the brain and the alignment of sensory coordinates in an oculocentric frame of reference is maintained with eye movements using an eye-position signal [3]

  • Altmann et al. [9] observed a significant mismatch negativity (MMN) for the craniocentric but not for the allocentric deviant after horizontal head rotations. These results argued in favour of a craniocentric representation of auditory space at the level of the MMN


Introduction

We permanently move our body, head, and eyes while perceiving the environment with our different senses. The position of a light- and sound-emitting object located in extrapersonal space is estimated simultaneously (1) from visual information received by the eyes moving in their orbits (that is, in oculocentric coordinates) and (2) from auditory information (namely interaural differences in time and level as well as monaural spectral cues [1]) received by the two ears and referenced to the head (that is, in craniocentric coordinates). Both types of spatial information change with respect to the body, which in turn moves within the environment. There is a substantial body of evidence, primarily based on single-neuron studies in various animal species, that auditory and visual spatial information is integrated in the brain (for review, see [2]) and that the alignment of sensory coordinates in an oculocentric (eye-centered) frame of reference is maintained across eye movements using an eye-position signal [3]. While auditory-visual spatial integration is known to take place as early as at the level of subcortical structures, namely in the superior colliculus [2], the posterior parietal cortex (PPC) has been suggested as the primary locus where the different coordinate frames of the various input signals are combined into common, distributed spatial representations, and where neural activities within these representations are related to higher-order spatial and non-spatial cognitive functions [4].
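To make the two reference frames concrete, the following sketch (an illustration under assumed values, not taken from the study) converts a fixed allocentric source azimuth into craniocentric coordinates for two head orientations and then derives an approximate interaural time difference from the craniocentric azimuth using the spherical-head (Woodworth) approximation; the head radius, speed of sound, angles, and function names are all assumptions.

```python
import numpy as np

def allocentric_to_craniocentric(source_az_deg, head_az_deg):
    """Express a source direction given in room (allocentric) coordinates
    relative to the head (craniocentric), assuming rotation in the
    horizontal plane only; angles in degrees, wrapped to [-180, 180)."""
    return (source_az_deg - head_az_deg + 180.0) % 360.0 - 180.0

def approx_itd(cranio_az_deg, head_radius=0.0875, speed_of_sound=343.0):
    """Approximate interaural time difference (seconds) for a far-field source,
    spherical-head (Woodworth) formula: ITD ~= r/c * (theta + sin(theta))."""
    theta = np.deg2rad(cranio_az_deg)
    return head_radius / speed_of_sound * (theta + np.sin(theta))

# A loudspeaker fixed at 20 deg in room coordinates: rotating the head from
# 0 deg to 30 deg leaves the allocentric direction unchanged but shifts the
# craniocentric direction and, with it, the interaural cues.
for head_az in (0.0, 30.0):
    cranio_az = allocentric_to_craniocentric(20.0, head_az)
    itd_us = approx_itd(cranio_az) * 1e6
    print(f"head at {head_az:4.0f} deg -> craniocentric azimuth {cranio_az:6.1f} deg, "
          f"ITD ~ {itd_us:6.1f} microseconds")
```

The point of the example is simply that a head rotation leaves the allocentric direction of a fixed loudspeaker unchanged while the craniocentric direction, and with it the binaural cues, changes.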

