Abstract

The brain combines sounds from the two ears, but what is the algorithm used to achieve this summation of signals? Here we combine psychophysical amplitude modulation discrimination and steady-state electroencephalography (EEG) data to investigate the architecture of binaural combination for amplitude-modulated tones. Discrimination thresholds followed a ‘dipper’ shaped function of pedestal modulation depth, and were consistently lower for binaural than monaural presentation of modulated tones. The EEG responses were greater for binaural than monaural presentation of modulated tones, and when a masker was presented to one ear, it produced only weak suppression of the response to a signal presented to the other ear. Both data sets were well-fit by a computational model originally derived for visual signal combination, but with suppression between the two channels (ears) being much weaker than in binocular vision. We suggest that the distinct ecological constraints on vision and hearing can explain this difference, if it is assumed that the brain avoids over-representing sensory signals originating from a single object. These findings position our understanding of binaural summation in a broader context of work on sensory signal combination in the brain, and delineate the similarities and differences between vision and hearing.

Highlights

  • The brain combines sounds from the two ears, but what is the algorithm used to achieve this summation of signals? Here we combine psychophysical amplitude modulation discrimination and steady-state electroencephalography (EEG) data to investigate the architecture of binaural combination for amplitude-modulated tones.

  • Stimuli consisted of a 1-kHz pure-tone carrier, amplitude modulated at a modulation frequency of either 40 Hz or 35 Hz, according to the equation: w(t) = 0.5 × [1 + m·cos(2π·f_m·t + π)] × sin(2π·f_c·t), where f_m is the modulation frequency in Hz, f_c is the carrier frequency in Hz, t is time in seconds, and m is the modulation depth, with a value from 0 to 1 (see the code sketch after this list).

  • A single computational model, in which signals from the two ears inhibit each other weakly before being combined, provided the best description of data sets from both experiments. This model architecture originates from work on binocular vision, showing a commonality between these two sensory systems. We discuss these results in the context of related empirical results, previous binaural models, and ecological constraints that differentially affect vision and hearing.
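
To make the stimulus construction concrete, the sketch below generates this waveform in Python. The sample rate, duration, and function name are illustrative choices of ours and are not specified in the text above.

```python
import numpy as np

def am_tone(m, fm=40.0, fc=1000.0, duration=0.5, fs=44100):
    """Amplitude-modulated pure tone:
    w(t) = 0.5 * (1 + m*cos(2*pi*fm*t + pi)) * sin(2*pi*fc*t)

    m  : modulation depth (0-1)
    fm : modulation frequency in Hz (40 or 35 Hz in the stimuli described above)
    fc : carrier frequency in Hz (1 kHz in the stimuli described above)
    duration, fs : length in seconds and sample rate in Hz (illustrative values)
    """
    t = np.arange(0, duration, 1.0 / fs)
    envelope = 0.5 * (1.0 + m * np.cos(2.0 * np.pi * fm * t + np.pi))
    return envelope * np.sin(2.0 * np.pi * fc * t)

# Example: a fully modulated (m = 1) 40 Hz AM tone on a 1 kHz carrier
signal = am_tone(m=1.0, fm=40.0)
```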

Introduction

The auditory system integrates information across the two ears. This operation confers several benefits, including increased sensitivity to low-intensity sounds[1] and the ability to infer the location and motion direction of sound sources from interaural time differences[2]. When a carrier stimulus (typically either a pure tone or broadband noise) is modulated in amplitude, neural oscillations at the modulation frequency can be detected at the scalp[15,16,17,18,19], and are typically strongest at the vertex in EEG recordings[20]. This steady-state auditory evoked potential (SSAEP) is greatest around 40 Hz[20,21] and increases monotonically with increasing modulation depth[17,18]. The SSAEP has been used to study binaural interactions, showing evidence of interaural suppression[23,24] and increased responses to binaurally summed stimuli[17,22,25].
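
For illustration, the sketch below implements one gain-control architecture of the kind summarized above, in which each ear's modulation signal is divisively suppressed by inputs to both ears (with a small interaural weight) before the two signals are summed and passed through an output nonlinearity. This follows the general form of two-stage signal-combination models from binocular vision; the parameter values and function name are placeholders chosen for illustration, not the fitted values from the experiments.

```python
def signal_response(m_sig_left, m_sig_right, m_mask_left=0.0, m_mask_right=0.0,
                    w=0.05, s=1.0, z=0.1, a=1.3, p=8.0, q=6.5):
    """Two-stage gain-control combination of modulation depth across the ears.

    Stage 1: each ear's signal is divisively suppressed by inputs to both ears;
             the interaural weight w sets the strength of cross-ear suppression
             (small w = weak interaural suppression).
    Stage 2: the two suppressed signals are summed and passed through an
             output nonlinearity.

    Masker terms contribute only to the suppressive denominators, as a crude
    stand-in for a masker modulated at a different (frequency-tagged) rate.
    All parameter values are illustrative placeholders, not fitted values.
    """
    left = m_sig_left ** a / (s + m_sig_left + m_mask_left
                              + w * (m_sig_right + m_mask_right))
    right = m_sig_right ** a / (s + m_sig_right + m_mask_right
                                + w * (m_sig_left + m_mask_left))
    combined = left + right
    return combined ** p / (z + combined ** q)

# With a small interaural weight w, binaural presentation produces a larger
# response than monaural presentation, and a contralateral masker only weakly
# reduces the response to a monaural signal.
print(signal_response(0.5, 0.5))                    # binaural signal
print(signal_response(0.5, 0.0))                    # monaural signal
print(signal_response(0.5, 0.0, m_mask_right=0.5))  # monaural signal + masker
```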
