Abstract

Brain data processing has entered the big data era, driven by rapid advances in neuroscience and in the experimental techniques for recording neuronal activities. Processing massive brain data has become routine in neuroscience research and practice, and it is vital for revealing hidden information and better understanding brain functions and malfunctions. Brain data are typically non-linear and non-stationary in nature, so existing algorithms and approaches to neural data processing are generally complicated in order to characterize this non-linearity and non-stationarity. Brain big data processing has pressing needs for appropriate computing technologies to address three grand challenges: (1) efficiency, (2) scalability and (3) reliability. Recent advances in computing technologies are making non-linear methods viable for sophisticated applications of massive brain data processing. General-Purpose Computing on the Graphics Processing Unit (GPGPU) provides an ideal environment for this purpose: it benefits from the tremendous computing power of modern graphics processing units, whose massively parallel architectures frequently deliver throughput an order of magnitude higher than that of modern multi-core CPUs. This article first recaps significant GPGPU-aided speed-ups of existing algorithms in neuroimaging and in processing electroencephalogram (EEG), functional magnetic resonance imaging (fMRI) and magnetoencephalography (MEG) data, among others.
The article then demonstrates a series of successful approaches to processing EEG data at various dimensions and scales in a massively parallel manner: (1) decomposition: a massively parallel Ensemble Local Mean Decomposition (ELMD) algorithm aided by GPGPU that decomposes EEG series, forming the basis of further time-frequency transformation, in real time without sacrificing processing precision; (2) synchronization measurement: a parallelized Nonlinear Interdependence (NLI) method for global synchronization measurement of multivariate EEG, achieving a speed-up of more than 1000 times and successfully localizing the epileptic focus; and (3) dimensionality reduction: a large-scale Parallel Factor Analysis that excels in run-time performance, scales hundreds of times better than the conventional approach and supports fast factorization of EEG with more than 1000 channels. Through these practices, massively parallel computing technology shows great potential in addressing the grand challenges of brain big data processing. Copyright © 2016 John Wiley & Sons, Ltd.
