Abstract

Auditory scene analysis is a demanding computational process that is performed automatically and efficiently by the healthy brain but vulnerable to the neurodegenerative pathology of Alzheimer's disease (AD). Here we assessed the functional neuroanatomy of auditory scene analysis in AD using the well-known ‘cocktail party effect’ as a model paradigm whereby stored templates for auditory objects (e.g., hearing one's spoken name) are used to segregate auditory ‘foreground’ and ‘background’. Patients with typical amnestic AD (n = 13) and age-matched healthy individuals (n = 17) underwent functional 3T-MRI using a sparse acquisition protocol with passive listening to auditory stimulus conditions comprising the participant's own name interleaved with or superimposed on multi-talker babble, and spectrally rotated (unrecognisable) analogues of these conditions. Name identification (conditions containing the participant's own name contrasted with spectrally rotated analogues) produced extensive bilateral activation involving superior temporal cortex in both the AD and healthy control groups, with no significant differences between groups. Auditory object segregation (conditions with interleaved name sounds contrasted with superimposed name sounds) produced activation of right posterior superior temporal cortex in both groups, again with no differences between groups. However, the cocktail party effect (interaction of own-name identification with auditory object segregation processing) produced activation of right supramarginal gyrus in the AD group that was significantly enhanced compared with the healthy control group. The findings delineate an altered functional neuroanatomical profile of auditory scene analysis in AD that may constitute a novel computational signature of this neurodegenerative pathology.

Highlights

  • Decoding the auditory world poses a formidable problem of neural computation

  • There was a significant interaction between group and test type (F(1,29) = 9.29, p = 0.005): these results were driven by poorer performance of the Alzheimer's disease (AD) group than the healthy control group on the auditory segregation detection task (t = 3.61, p = 0.001)

  • We have shown that the functional neuroanatomy of auditory scene analysis is altered in AD compared to healthy older individuals

Introduction

Decoding the auditory world poses a formidable problem of neural computation. Our brains normally solve this problem efficiently and automatically, but the neural basis of ‘auditory scene analysis’ remains incompletely understood. The cocktail party effect is a celebrated example of a much wider category of auditory phenomena that depend on generic computational processes that together segregate an acoustic target or ‘foreground’ sound from the acoustic background. Lexical processes may modulate auditory scene analysis, perhaps via template-matching algorithms (Billig et al., 2013; Griffiths and Warren, 2002), as well as additional parietal and prefrontal mechanisms engaged by speech-in-noise processing under conditions of increased attentional demand (Binder et al., 2004; Davis et al., 2011; Nakai et al., 2005; Scott et al., 2004; Scott and McGettigan, 2013).

