Abstract

Resting-state magnetoencephalography (MEG) data show complex but structured spatiotemporal patterns. However, the neurophysiological basis of these signal patterns is not fully known, and the underlying signal sources are mixed in MEG measurements. Here, we developed a method based on nonlinear independent component analysis (ICA), a generative model trainable with unsupervised learning, to learn representations from resting-state MEG data. After being trained on a large dataset from the Cam-CAN repository, the model learned to represent and generate patterns of spontaneous cortical activity using latent nonlinear components, which reflect principal cortical patterns with specific spectral modes. When applied to the downstream classification task of audio-visual MEG, the nonlinear ICA model achieves performance competitive with deep neural networks despite limited access to labels. We further validated the generalizability of the model across datasets by applying it to an independent neurofeedback dataset for decoding subjects' attentional states; it provided real-time feature extraction and decoded mindfulness versus thought-inducing tasks with an accuracy of around 70% at the individual level, substantially higher than that obtained by linear ICA or other baseline methods. Our results demonstrate that nonlinear ICA is a valuable addition to existing tools, particularly suited to unsupervised representation learning of spontaneous MEG activity, which can then be applied to specific goals or tasks when labelled data are scarce.
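To make the pipeline concrete, the sketch below illustrates one common estimator for nonlinear ICA, time-contrastive learning (TCL; Hyvärinen & Morioka, 2016), in which a network is trained without labels to predict the time segment each sample belongs to, and its hidden layer serves as the learned latent components. This is a minimal illustration only: the use of PyTorch, the synthetic stand-in data, and all network sizes and training settings are assumptions, not the exact model or preprocessing used in this study.

```python
# Minimal sketch of time-contrastive learning (TCL), one estimator for nonlinear ICA.
# Illustration only: synthetic nonstationary data stands in for resting-state MEG,
# and the architecture/hyperparameters are assumptions, not those of the paper.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Synthetic nonstationary sources: variance changes across time segments.
n_segments, seg_len, n_sources = 8, 500, 4
scales = torch.rand(n_segments, n_sources) + 0.5            # per-segment source variance
sources = torch.randn(n_segments, seg_len, n_sources) * scales[:, None, :]
labels = torch.arange(n_segments).repeat_interleave(seg_len)  # segment index per sample

# A fixed random nonlinear mixture plays the role of the unknown sensor-level mixing.
mixing = nn.Sequential(nn.Linear(n_sources, n_sources), nn.Tanh(),
                       nn.Linear(n_sources, n_sources))
with torch.no_grad():
    x = mixing(sources.reshape(-1, n_sources))               # observed "sensor" data

# Feature extractor + segment classifier: TCL trains the network to predict the
# segment of each sample; the hidden representation then recovers the independent
# components up to component-wise transformations (under nonstationarity assumptions).
feature_net = nn.Sequential(nn.Linear(n_sources, 32), nn.ReLU(),
                            nn.Linear(32, n_sources))
classifier = nn.Linear(n_sources, n_segments)
optimizer = torch.optim.Adam(
    list(feature_net.parameters()) + list(classifier.parameters()), lr=1e-3)

for epoch in range(200):
    optimizer.zero_grad()
    h = feature_net(x)                                       # latent nonlinear components
    loss = nn.functional.cross_entropy(classifier(h), labels)
    loss.backward()
    optimizer.step()

# The unsupervised features h could then feed a small supervised decoder when only
# a few labelled trials are available (the downstream setting described above).
print(f"final segment-classification loss: {loss.item():.3f}")
```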
