Abstract

Dealing with subjects who are unable to attain a proper level of performance, that is, those with brain–computer interface (BCI) illiteracy or BCI inefficiency, remains a major issue in human electroencephalography (EEG) BCI systems. The most suitable approach to address this issue is to analyze EEG signals of individual subjects, recorded independently before the main BCI tasks, to estimate their performance on those tasks. This study focused on non-linear analyses and deep learning techniques to investigate the significant relationship between the intrinsic characteristics of a prior idle resting state and the subsequent BCI performance. To achieve this objective, a public EEG motor movement/imagery dataset comprising two EEG recordings per subject, one from an idle resting state and one from a motor imagery BCI task, was used in this study. For the EEG signals in the prior resting state, spectral analysis as well as non-linear analyses, such as sample entropy, permutation entropy, and recurrence quantification analysis (RQA), were performed to obtain groups of EEG features representing each subject's intrinsic EEG characteristics. For the EEG signals in the BCI task, four decoding methods, namely a filter-bank common spatial pattern-based classifier and three types of convolutional neural network-based classifiers, quantified each subject's subsequent BCI performance. Statistical linear regression and ANOVA with post hoc analyses verified the significant relationship between the non-linear EEG features in the prior resting state and three BCI performance groups (low, intermediate, and high) that were statistically discriminated by the subsequent BCI performance. As a result, we found that the frontal theta rhythm, ranging from 4 to 8 Hz, during the eyes-open condition was highly associated with the subsequent BCI performance. The RQA findings that higher determinism and lower mean recurrence time were mainly observed in the higher-performance groups indicate that more regular and stable properties of the EEG signals over the frontal regions during the prior resting state provide a critical clue for assessing an individual's BCI ability in the following motor imagery task.
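
As a rough illustration of the resting-state feature extraction described above, the following minimal sketch (not the authors' code) shows how two of the features could be computed for a single EEG channel with NumPy and SciPy: relative theta-band (4-8 Hz) power from a Welch spectrum and sample entropy. The sampling rate, entropy parameters (m = 2, r = 0.2 × SD), and the toy signal are illustrative assumptions rather than values reported in the paper.

```python
import numpy as np
from scipy.signal import welch


def theta_band_power(x, fs):
    """Relative theta-band (4-8 Hz) power of a 1-D EEG signal."""
    freqs, psd = welch(x, fs=fs, nperseg=min(len(x), 2 * fs))
    theta_mask = (freqs >= 4) & (freqs <= 8)
    return np.trapz(psd[theta_mask], freqs[theta_mask]) / np.trapz(psd, freqs)


def sample_entropy(x, m=2, r_factor=0.2):
    """Sample entropy SampEn(m, r) with tolerance r = r_factor * std(x).

    Simple O(N^2) implementation, intended only for short segments.
    """
    x = np.asarray(x, dtype=float)
    r = r_factor * np.std(x)
    n = len(x)

    def count_pairs(dim):
        # Overlapping templates of length `dim`; count pairs whose Chebyshev
        # distance is within r, excluding self-matches.
        templates = np.array([x[i:i + dim] for i in range(n - dim)])
        dists = np.max(np.abs(templates[:, None, :] - templates[None, :, :]), axis=2)
        return (np.sum(dists <= r) - len(templates)) / 2

    b, a = count_pairs(m), count_pairs(m + 1)
    return -np.log(a / b) if a > 0 and b > 0 else np.inf


fs = 160                                   # assumed sampling rate (Hz)
t = np.arange(0, 10, 1 / fs)
eeg = np.sin(2 * np.pi * 6 * t) + 0.5 * np.random.randn(len(t))  # toy "frontal" signal
print(theta_band_power(eeg, fs), sample_entropy(eeg))
```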

Highlights

  • As a means of controlling external devices without real limb movement, motor imagery (MI)-based brain–computer interface (BCI) technology enables the translation of the user’s motor intentions into specific commands to perform the corresponding actions (Schalk et al., 2004; Lotte et al., 2018)

  • The groups were intentionally divided based on brain–computer interface (BCI) performance, which is indicated by the maximum accuracy rate among all decoding methods (see the grouping sketch after these highlights)

  • We revealed that both spectral and recurrence quantification analysis (RQA) features extracted from the resting state in the eyes-open (REO) condition before the BCI task were highly correlated with the subsequent BCI performance
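
As an assumed illustration of the grouping described in the second highlight, the sketch below takes each subject's best accuracy across the four decoding methods as the BCI performance measure and splits subjects into low-, intermediate-, and high-performance groups at the tertiles; the subject labels, accuracy values, and tertile cutoffs are hypothetical, since the excerpt does not state the actual criteria.

```python
import numpy as np

# Hypothetical per-decoder accuracies for three subjects (not real results).
accuracies = {
    "S01": {"FBCSP": 0.62, "CNN-1": 0.58, "CNN-2": 0.60, "CNN-3": 0.55},
    "S02": {"FBCSP": 0.81, "CNN-1": 0.85, "CNN-2": 0.83, "CNN-3": 0.79},
    "S03": {"FBCSP": 0.70, "CNN-1": 0.72, "CNN-2": 0.69, "CNN-3": 0.74},
}

# BCI performance per subject = maximum accuracy among all decoding methods.
best = {subj: max(scores.values()) for subj, scores in accuracies.items()}

# Illustrative tertile split into low / intermediate / high groups.
low_cut, high_cut = np.quantile(list(best.values()), [1 / 3, 2 / 3])
groups = {subj: ("low" if acc < low_cut
                 else "intermediate" if acc < high_cut
                 else "high")
          for subj, acc in best.items()}
print(groups)   # e.g. {'S01': 'low', 'S02': 'high', 'S03': 'intermediate'}
```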


Introduction

As a means of controlling external devices without real limb movement, motor imagery (MI)-based brain–computer interface (BCI) technology enables the translation of the user’s motor intentions into specific commands to perform the corresponding actions (Schalk et al., 2004; Lotte et al., 2018). MI-BCI technology has been widely used in neurorehabilitation systems for subjects to recover their sensorimotor ability after stroke (Ang et al., 2010; Leamy et al., 2014), and as a ubiquitous system for healthy individuals to control external devices (Liao et al., 2012; Marshall et al., 2013; He et al., 2015; Kim et al., 2019). However, a considerable number of users cannot attain a proper level of control, a problem referred to as BCI illiteracy or BCI inefficiency. To deal with this problem appropriately, previous studies have attempted to design more efficient BCI paradigms (Jeunet et al., 2016; Abiri et al., 2019) or to discriminate low-performance groups from the others prior to the main BCI tasks (Bamdadian et al., 2014; Suk et al., 2014). If the individual degree of the subsequent BCI performance can be estimated in advance, much time and many resources can be saved (Blankertz et al., 2010; Ahn et al., 2013; Bamdadian et al., 2014; Suk et al., 2014; Kwon et al., 2020).

