Abstract
Methods employed to determine hemispheric language dominance using magnetoencephalography (MEG) have differed significantly across studies in the choice of language task, the nature of the physiological response studied, recording hardware, and source modeling methods. Our goal was to determine whether an analysis based on distributed source modeling can replicate the results of prior studies that used dipole modeling of event-related fields (ERFs) generated by an auditory word-recognition task to determine language dominance in patients with epilepsy.

We analyzed data from 45 adult patients with drug-resistant partial epilepsy who performed an auditory word-recognition task during MEG recording and also completed a language fMRI study as part of their evaluation for epilepsy surgery. Source imaging of auditory ERFs was performed using dynamic statistical parametric mapping (dSPM). Language laterality indices (LIs) were calculated for four regions of interest (ROIs) by counting above-threshold activations within a 300–600 ms time window after stimulus onset. Language laterality (LL) classifications based on these LIs were compared with the results from fMRI.

The most lateralized MEG responses to language stimuli were observed in a parietal region that included the angular and supramarginal gyri (AngSmg). In this region, using a half-maximal threshold, source activations were left dominant in 32 patients (71%), right dominant in 8 (18%), and symmetric in 5 (11%). The best agreement between MEG and fMRI on the ternary classification of regional language dominance into left, right, or symmetric groups was also found at the AngSmg ROI (69%), followed by the whole-hemisphere and temporal ROIs (both 62%). The frontal ROI showed the least agreement with fMRI (51%). Gross discordances between MEG and fMRI findings were disproportionately of the type in which MEG favored atypical right-hemispheric language in a patient with right-hemispheric seizure origin (p < 0.05 at three of the four ROIs).

In a parietal region that includes the angular and supramarginal gyri, language laterality estimates based on dSPM of ERFs during auditory word recognition show a degree of MEG-fMRI concordance comparable to previously published estimates of MEG-Wada concordance using dipole-counting methods and the same task. Our data also suggest that MEG language laterality estimates based on this task may be influenced by the laterality of epileptic networks in some patients. This has not been reported previously and deserves further study.
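To make the laterality computation concrete, the following is a minimal sketch of the kind of calculation the abstract describes: counting above-threshold dSPM activations per hemisphere within an ROI and converting the counts into a laterality index and a ternary left/right/symmetric label. The abstract only states that activations above a half-maximal threshold were counted in a 300–600 ms window; the (L − R)/(L + R) formula, the symmetry cutoff, and all names below are illustrative assumptions, not the authors' reported parameters.

```python
import numpy as np

def laterality_index(left_activation, right_activation, threshold):
    """Count supra-threshold activations per hemisphere and return an LI.

    Assumes the common convention LI = (L - R) / (L + R), which the
    abstract does not state explicitly.
    """
    n_left = int(np.sum(left_activation >= threshold))
    n_right = int(np.sum(right_activation >= threshold))
    if n_left + n_right == 0:
        return 0.0  # no supra-threshold activity; treat as symmetric
    return (n_left - n_right) / (n_left + n_right)

def classify(li, symmetric_cutoff=0.1):
    """Ternary left/right/symmetric classification (cutoff value is assumed)."""
    if li > symmetric_cutoff:
        return "left"
    if li < -symmetric_cutoff:
        return "right"
    return "symmetric"

# Example: hypothetical dSPM values for ROI vertices in each hemisphere within
# the 300-600 ms window, thresholded at half of the maximum across hemispheres.
left_vals = np.array([4.2, 7.9, 6.1, 2.3])
right_vals = np.array([3.0, 2.5, 1.1, 0.8])
half_max = 0.5 * max(left_vals.max(), right_vals.max())
li = laterality_index(left_vals, right_vals, threshold=half_max)
print(li, classify(li))
```

In this toy example the left hemisphere has more supra-threshold vertices, so the LI is positive and the ROI would be labeled left dominant; repeating the same calculation per ROI and per patient yields the ternary classifications that the abstract compares against fMRI.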