Abstract

Neural codes are reflected in complex neural activation patterns. Conventional electroencephalography (EEG) decoding analyses summarize activations by averaging or down-sampling signals within the analysis window, which discards informative fine-grained patterns. While previous studies have proposed distinct statistical features capable of capturing variability-dependent neural codes, it has been suggested that the brain could use a combination of encoding protocols that is not reflected in any single mathematical feature alone. To test this possibility, we combined 30 features using 17 state-of-the-art supervised and unsupervised feature selection procedures. Across three datasets, we compared decoding of visual object category between these 17 sets of combined features, and between the combined and individual features. Object category could be robustly decoded using the combined features from all 17 algorithms. However, the combined feature sets, which were equalized in dimensionality to the individual features, were outperformed at most time points by the multiscale feature of Wavelet coefficients. Moreover, the Wavelet coefficients also explained behavioral performance more accurately than the combined features. These results suggest that a single but multiscale encoding protocol may capture EEG neural codes better than any combination of protocols. Our findings put new constraints on models of neural information encoding in EEG.
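
To make the comparison described above concrete, the sketch below contrasts a multiscale Wavelet-coefficient feature with a combined set of simple statistical features reduced to a fixed dimensionality, both decoded with cross-validated classification. The synthetic data, the db4 wavelet, the ANOVA-based feature selection, and the linear SVM classifier are illustrative assumptions, not the exact pipeline used in the study.

```python
# Hypothetical sketch: decode object category from single-trial EEG windows using
# (a) multiscale Wavelet coefficients and (b) a combined statistical feature set
# reduced to a fixed dimensionality. All parameter choices here are assumptions.
import numpy as np
import pywt
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n_trials, n_samples = 200, 256                      # trials x time samples (one channel)
X_raw = rng.standard_normal((n_trials, n_samples))  # synthetic single-trial windows
y = rng.integers(0, 4, n_trials)                    # four object categories

def wavelet_features(x):
    """Concatenated multiscale Wavelet (db4) coefficients of one trial."""
    return np.concatenate(pywt.wavedec(x, wavelet="db4", level=4))

def statistical_features(x):
    """A small illustrative subset of single-valued statistical features."""
    return np.array([
        x.mean(), x.var(), np.abs(np.diff(x)).mean(),   # mean, variance, mean abs. derivative
        ((x[:-1] * x[1:]) < 0).mean(),                  # zero-crossing rate
        ((x - x.mean()) ** 3).mean() / x.std() ** 3,    # skewness
        ((x - x.mean()) ** 4).mean() / x.var() ** 2,    # kurtosis
    ])

X_wav = np.array([wavelet_features(x) for x in X_raw])
X_comb = np.array([statistical_features(x) for x in X_raw])

# 5-fold cross-validated decoding; the combined set is reduced to k = 4 features
pipe_wav = make_pipeline(StandardScaler(), SVC(kernel="linear"))
pipe_comb = make_pipeline(StandardScaler(), SelectKBest(f_classif, k=4),
                          SVC(kernel="linear"))
acc_wav = cross_val_score(pipe_wav, X_wav, y, cv=5).mean()
acc_comb = cross_val_score(pipe_comb, X_comb, y, cv=5).mean()
print(f"Wavelet accuracy: {acc_wav:.2f}  Combined accuracy: {acc_comb:.2f}")
```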

Highlights

  • Abstract models of feed-forward visual processing suggest that visual sensory information enters the brain through the retina, reaches the lateral geniculate nucleus in the thalamus and continues to early visual cortices before moving forward to the anterior parts of the inferior temporal cortices, where semantic information is extracted from the visual inputs (DiCarlo et al., 2012).

  • There is evidence that EEG activations carry information in features (e.g., the phase rather than the amplitude of slow oscillations) that differ from those of invasive neural data such as spiking activity (Ng et al., 2013); see the sketch following these highlights.

  • To gain a better understanding of EEG, previous studies have extracted a wide variety of features of neural activation to decode information about visual object categories.
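
As referenced in the second highlight, the sketch below shows one common way to separate the phase and the amplitude of slow oscillations in a single EEG channel, using a band-pass filter and the Hilbert transform. The sampling rate, band limits, and synthetic trace are assumptions for illustration, not parameters from the cited study.

```python
# Illustrative sketch (not from the cited studies): separating the phase and the
# amplitude of slow oscillations, the two candidate information-carrying features
# contrasted in Ng et al. (2013). Band limits and sampling rate are assumed.
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

fs = 250.0                                  # sampling rate (Hz), assumed
t = np.arange(0, 2.0, 1.0 / fs)
eeg = np.sin(2 * np.pi * 3 * t) + 0.5 * np.random.randn(t.size)  # synthetic trace

# Band-pass to a slow (1-8 Hz) oscillation range
b, a = butter(4, [1.0, 8.0], btype="bandpass", fs=fs)
slow = filtfilt(b, a, eeg)

analytic = hilbert(slow)                    # analytic signal
amplitude = np.abs(analytic)                # instantaneous amplitude (envelope)
phase = np.angle(analytic)                  # instantaneous phase in radians

print(amplitude[:5], phase[:5])
```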

Introduction

How is information about the world encoded by the human brain? Researchers have tried to answer this question using a variety of brain imaging techniques across all sensory modalities. While the majority of EEG and MEG decoding studies still rely on the within-trial “mean” of activity (the average activation level within the sliding analysis window) as the main source of information (Grootswagers et al., 2017; Karimi-Rouzbahani et al., 2017b), recent theoretical and experimental studies have shown evidence that temporal variabilities of neural activity (sample-to-sample changes in the level of activity) form an additional channel of information encoding (Orbán et al., 2016). These temporal variabilities have provided information about the “complexity,” “uncertainty,” and “variance” of the visual stimulus, which correlated with the semantic category of the presented image (Hermundstad et al., 2014; Orbán et al., 2016; Garrett et al., 2020). It is clear that neural variabilities carry significant amounts of information about different aspects of sensory processing and may play a major role in determining behavior (Waschke et al., 2021).
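
As a concrete illustration of the two candidate sources of information mentioned above, the following minimal sketch computes the conventional within-window mean alongside a simple temporal-variability feature (the within-window variance) in a sliding analysis window. The window length, step size, and synthetic epoch are assumed for the example.

```python
# Minimal sketch (assumed window length and synthetic data): the conventional
# within-window "mean" feature versus a simple temporal-variability feature
# (sample-to-sample variance) computed in a sliding analysis window.
import numpy as np

fs = 1000                      # sampling rate (Hz), assumed
win, step = 50, 10             # 50 ms window, 10 ms step, in samples
trial = np.random.randn(600)   # one synthetic single-trial, single-channel epoch

mean_feat, var_feat = [], []
for start in range(0, trial.size - win + 1, step):
    window = trial[start:start + win]
    mean_feat.append(window.mean())   # activation level (conventional feature)
    var_feat.append(window.var())     # temporal variability within the window

mean_feat, var_feat = np.array(mean_feat), np.array(var_feat)
print(mean_feat.shape, var_feat.shape)  # one value per window position
```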
