Leveraging Cross-Subject Transfer Learning and Signal Augmentation for Enhanced RGB Color Decoding from EEG Data
Decoding neural patterns for RGB colors from electroencephalography (EEG) signals is an important step towards advancing the use of visual features as input for brain-computer interfaces (BCIs). This study aims to overcome challenges such as inter-subject variability and limited data availability by investigating whether transfer learning and signal augmentation can improve decoding performance. It introduces an approach that combines transfer learning for cross-subject information transfer with data augmentation that increases representational diversity, improving RGB color classification from EEG data. Deep learning models, including the CNN-based DeepConvNet (DCN) and the attention-based Adaptive Temporal Convolutional Network (ATCNet), were pre-trained on subjects with representative brain responses and fine-tuned on target subjects to account for individual differences. Signal augmentation techniques such as frequency slice recombination and Gaussian noise addition improved model generalization by enriching the training dataset. The combined methodology yielded a classification accuracy of 83.5% across all subjects on an EEG dataset of 31 previously studied subjects. The improved accuracy and reduced variability underscore the effectiveness of transfer learning and signal augmentation in addressing data sparsity and variability, offering promising implications for EEG-based classification and BCI applications.
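The two augmentation techniques named in this abstract can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: it assumes epochs are stored as `channels x samples` NumPy arrays, and the function names, SNR parameterization, and cutoff choice are illustrative.

```python
import numpy as np

def add_gaussian_noise(epoch, snr_db=20.0, rng=None):
    """Add Gaussian noise scaled to a target signal-to-noise ratio (dB)."""
    if rng is None:
        rng = np.random.default_rng()
    signal_power = np.mean(epoch ** 2)
    noise_power = signal_power / (10.0 ** (snr_db / 10.0))
    return epoch + rng.normal(0.0, np.sqrt(noise_power), size=epoch.shape)

def frequency_slice_recombination(epoch_a, epoch_b, cutoff_hz, fs):
    """Build a surrogate epoch from the low-frequency content of one
    same-class epoch and the high-frequency content of another."""
    fa = np.fft.rfft(epoch_a, axis=-1)
    fb = np.fft.rfft(epoch_b, axis=-1)
    freqs = np.fft.rfftfreq(epoch_a.shape[-1], d=1.0 / fs)
    mixed = np.where(freqs <= cutoff_hz, fa, fb)  # slice spectra at the cutoff
    return np.fft.irfft(mixed, n=epoch_a.shape[-1], axis=-1)
```

In practice, recombination would only mix epochs sharing the same class label, so the surrogate keeps a plausible class-specific structure while still differing from both parents.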
- Supplementary Content
227
- 10.3389/fncom.2019.00087
- Jan 21, 2020
- Frontiers in Computational Neuroscience
Brain computer interfaces (BCI) for the rehabilitation of motor impairments exploit sensorimotor rhythms (SMR) in the electroencephalogram (EEG). However, the neurophysiological processes underpinning the SMR often vary over time and across subjects. Inherent intra- and inter-subject variability causes covariate shift in data distributions that impede the transferability of model parameters amongst sessions/subjects. Transfer learning includes machine learning-based methods to compensate for inter-subject and inter-session (intra-subject) variability manifested in EEG-derived feature distributions as a covariate shift for BCI. Besides transfer learning approaches, recent studies have explored psychological and neurophysiological predictors as well as inter-subject associativity assessment, which may augment transfer learning in EEG-based BCI. Here, we highlight the importance of measuring inter-session/subject performance predictors for generalized BCI frameworks for both normal and motor-impaired people, reducing the necessity for tedious and annoying calibration sessions and BCI training.
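One common, simple baseline against the session-to-session covariate shift described above is to standardize each session's data with its own statistics before any cross-session modeling. A minimal sketch (the function name and epsilon guard are illustrative, not from the review):

```python
import numpy as np

def standardize_session(X, eps=1e-12):
    """Z-score each channel of a session (channels x samples) using that
    session's own mean and standard deviation, so that feature
    distributions from different sessions are brought onto a common scale."""
    mu = X.mean(axis=-1, keepdims=True)
    sd = X.std(axis=-1, keepdims=True)
    return (X - mu) / (sd + eps)
```

More elaborate alignment methods (e.g., Riemannian or adversarial approaches) build on the same idea of matching distributions across sessions and subjects.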
- Conference Article
2
- 10.1109/iww-bci.2019.8737258
- Feb 1, 2019
Application of the Brain Computer Interface (BCI) is revolutionizing control of prosthetic and exoskeleton devices directly through human thought. A BCI is expected to classify day-to-day activities like grabbing and lifting a glass of water. To date, motor imagery based BCI for two closely separated muscle groups, such as those used for grabbing and lifting an object, has not been studied. The challenge of accurately classifying motor imagery of these activities could be addressed with subject-individualized BCIs. We proposed to achieve this by using a neural network (machine learning) classifier on high-resolution (129-channel) EEG data, evaluated continuously every 80 ms after spatial filtering using a spherical Laplacian. This study employed a motor imagery based BCI optimized for individual subjects (n=28), using EEG data of actual movement, for classifying motor imagery of grab, lift, and grab+lift of the right forearm. A three-layered neural network with two output nodes was created to classify the motor imagery using the power of the 8–14 Hz band of 500 ms of EEG data. This BCI was able to classify motor imagery with 95.65% accuracy. In continuous evaluation, the BCI showed a True Positive Rate of 24.89% and a False Positive Rate of 12.93%. The percentage of correctly classified motor imagery in each trial was 84.99%, 72.23%, and 17.07% for grab, lift, and combined, respectively. In conclusion, the current BCI was able to classify the motor imagery of grab, lift, and grab+lift successfully, based on the last 500 ms of EEG recorded during actual movement, without any prior motor imagery training.
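The 8–14 Hz band-power feature this abstract relies on can be estimated with a plain FFT periodogram. A hedged sketch, assuming a `channels x samples` epoch; the periodogram-style estimator shown here is one common choice, not necessarily the paper's exact pipeline:

```python
import numpy as np

def band_power(epoch, fs, band=(8.0, 14.0)):
    """Mean spectral power of `epoch` (channels x samples) inside `band`
    in Hz, estimated from the squared magnitude of the real FFT."""
    spectrum = np.abs(np.fft.rfft(epoch, axis=-1)) ** 2
    freqs = np.fft.rfftfreq(epoch.shape[-1], d=1.0 / fs)
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return spectrum[..., mask].mean(axis=-1)  # one feature per channel
```

At a 250 Hz sampling rate, a 500 ms window is 125 samples, giving a 2 Hz frequency resolution, which is coarse but sufficient to isolate the 8–14 Hz mu band.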
- Research Article
297
- 10.1109/tcbb.2021.3052811
- Jan 19, 2021
- IEEE/ACM Transactions on Computational Biology and Bioinformatics
Brain-Computer interfaces (BCIs) enhance the capability of human brain activities to interact with the environment. Recent advancements in technology and machine learning algorithms have increased interest in electroencephalographic (EEG)-based BCI applications. EEG-based intelligent BCI systems can facilitate continuous monitoring of fluctuations in human cognitive states under monotonous tasks, which is both beneficial for people in need of healthcare support and general researchers in different domain areas. In this review, we survey the recent literature on EEG signal sensing technologies and computational intelligence approaches in BCI applications, compensating for the gaps in the systematic summary of the past five years. Specifically, we first review the current status of BCI and signal sensing technologies for collecting reliable EEG signals. Then, we demonstrate state-of-the-art computational intelligence techniques, including fuzzy models and transfer learning in machine learning and deep learning algorithms, to detect, monitor, and maintain human cognitive states and task performance in prevalent applications. Finally, we present a couple of innovative BCI-inspired healthcare applications and discuss future research directions in EEG-based BCI research.
- Research Article
2
- 10.11834/jig.230031
- Jan 1, 2023
- Journal of Image and Graphics
A survey on encoding and decoding technology of non-invasive brain-computer interface
- Research Article
16
- 10.1186/s42490-024-00080-2
- May 2, 2024
- BMC Biomedical Engineering
Since their inception more than 50 years ago, Brain-Computer Interfaces (BCIs) have held promise to compensate for functions lost by people with disabilities through allowing direct communication between the brain and external devices. While research throughout the past decades has demonstrated the feasibility of BCI to act as a successful assistive technology, the widespread use of BCI outside the lab is still beyond reach. This can be attributed to a number of challenges that need to be addressed for BCI to be of practical use including limited data availability, limited temporal and spatial resolutions of brain signals recorded non-invasively and inter-subject variability. In addition, for a very long time, BCI development has been mainly confined to specific simple brain patterns, while developing other BCI applications relying on complex brain patterns has been proven infeasible. Generative Artificial Intelligence (GAI) has recently emerged as an artificial intelligence domain in which trained models can be used to generate new data with properties resembling that of available data. Given the enhancements observed in other domains that possess similar challenges to BCI development, GAI has been recently employed in a multitude of BCI development applications to generate synthetic brain activity; thereby, augmenting the recorded brain activity. Here, a brief review of the recent adoption of GAI techniques to overcome the aforementioned BCI challenges is provided demonstrating the enhancements achieved using GAI techniques in augmenting limited EEG data, enhancing the spatiotemporal resolution of recorded EEG data, enhancing cross-subject performance of BCI systems and implementing end-to-end BCI applications. 
GAI could represent the means by which BCI would be transformed into a prevalent assistive technology, thereby improving the quality of life of people with disabilities, and helping in adopting BCI as an emerging human-computer interaction technology for general use.
- Conference Article
16
- 10.1109/ner.2013.6695857
- Nov 1, 2013
In EEG-based motor imagery Brain-Computer interface (BCI), EEG data collected in the calibration phase is used as a subject-specific model to classify the EEG data in the evaluation phase. Previous study has shown the feasibility of calibrating EEG-based BCI from passive movement. This paper investigates the primary sensorimotor area activation from fNIRS on 4 subjects using multimodal NIRS and EEG-based BCI system while performing motor imagery and passive movement of the hand by a Haptic Knob robot. NIRS_SPM is used to compute the changes in hemoglobin response and to generate brain activation map based on the contrasts of motor imagery versus idle and passive movement versus idle. The results on the contrasts showed that passive movement versus idle yielded significant differences compared to motor imagery versus idle. In addition, the results of classifying the NIRS and EEG data separately also showed that the accuracies on classifying passive movement versus idle are better than that of motor imagery versus idle. The results suggest a potential of using passive movement data to calibrate motor imagery in a multimodal NIRS and EEG-based BCI.
- Book Chapter
23
- 10.1007/978-3-319-58628-1_4
- Jan 1, 2017
Transfer learning (TL) has gained significant interest recently in brain computer interface (BCI) research as a key approach to designing robust predictors for cross-subject and cross-experiment prediction of brain activities in response to cognitive events. We carried out in this paper the first comprehensive investigation of the transferability of deep convolutional neural networks (CNNs) for cross-subject and cross-experiment prediction of image Rapid Serial Visual Presentation (RSVP) events. We show that for both cross-subject and cross-experiment predictions, all convolutional layers and fully connected layers contain both general and subject/experiment-specific features, and that transfer learning with weight fine-tuning can improve the prediction performance over that without transfer. However, for cross-subject prediction, the convolutional layers capture more subject-specific features, whereas for cross-experiment prediction, the convolutional layers capture more general features across experiments. Our study provides important information that will guide the design of more sophisticated deep transfer learning algorithms for EEG-based classification in BCI applications.
- Conference Article
6
- 10.1109/ccoms.2019.8821739
- Feb 1, 2019
In EEG-based Brain-Computer Interface (BCI) applications, the EEG recording is often contaminated by different types of artifacts that can cause misinterpretation of the BCI output. Automatic detection and removal of such offending artifacts from EEG for online processing pose a great challenge. In this paper, we present a novel method that can map the artifact probability of an EEG epoch based on four statistical measures: entropy, kurtosis, skewness, and the Periodic Waveform Index (PWI). A removal method based on the stationary wavelet transform is then applied to epochs exceeding a particular probability threshold set by the user. This epoch-by-epoch preprocessing allows the user to tune the threshold parameters after some initial training with the same EEG recordings, and can eventually be applied to both offline and online processing. Experimental results with both simulated and real EEG data demonstrate the efficacy of the method: it can reliably trace artifactual epochs with reasonable accuracy and reduces the artifacts from EEG with very little distortion to the signal of interest. Further testing with EEG datasets for BCI experiments also shows that artifact removal can significantly enhance BCI performance in both motor-imagery (MI) and event-related potential (ERP) based BCI applications.
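Three of the four statistical measures named above are straightforward to compute per epoch. A minimal sketch under stated assumptions: the PWI is specific to this paper and is omitted, the thresholds are illustrative rather than taken from the method, and the paper maps these statistics to a probability rather than applying a hard rule as done here.

```python
import numpy as np

def epoch_stats(epoch, bins=32):
    """Entropy, excess kurtosis, and skewness of one 1-D EEG epoch."""
    hist, _ = np.histogram(epoch, bins=bins)
    p = hist / hist.sum()
    p = p[p > 0]
    entropy = -np.sum(p * np.log2(p))          # Shannon entropy (bits)
    z = (epoch - epoch.mean()) / epoch.std()   # standardized samples
    kurtosis = np.mean(z ** 4) - 3.0           # excess kurtosis
    skewness = np.mean(z ** 3)
    return entropy, kurtosis, skewness

def is_artifactual(epoch, kurt_thresh=5.0, skew_thresh=2.0):
    """Flag an epoch whose higher-order statistics deviate strongly from
    the roughly Gaussian profile expected of clean EEG."""
    _, kurt, skew = epoch_stats(epoch)
    return abs(kurt) > kurt_thresh or abs(skew) > skew_thresh
```

High kurtosis is a standard cue for transient artifacts such as eye blinks or electrode pops, since a single large deflection dominates the fourth moment.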
- Research Article
32
- 10.1109/tnsre.2023.3259730
- Jan 1, 2023
- IEEE Transactions on Neural Systems and Rehabilitation Engineering
In recent years, deep neural network-based transfer learning (TL) has shown outstanding performance in EEG-based motor imagery (MI) brain-computer interface (BCI). However, due to the long preparation for pre-trained models and the arbitrariness of source domain selection, using deep transfer learning on different datasets and models is still challenging. In this paper, we propose a multi-direction transfer learning (MDTL) strategy for cross-subject MI EEG-based BCI. This strategy transfers knowledge from multiple source domains to the target domain as well as from one source domain to another. It is model-independent, so it can be quickly deployed on existing models. Three generic deep learning models for MI classification (DeepConvNet, ShallowConvNet, and EEGNet) and two public motor imagery datasets (BCIC IV dataset 2a and Lee2019) are used in this study to verify the proposed strategy. For the four-class BCIC IV dataset 2a, the proposed MDTL achieves 80.86%, 81.95%, and 75.00% mean prediction accuracy using the three models, outperforming those without MDTL by 5.79%, 6.64%, and 11.42%. For the binary-class Lee2019 dataset, MDTL achieves 88.2% mean accuracy using DeepConvNet, outperforming the accuracy without MDTL by 23.48%. The achieved 81.95% and 88.2% are also better than the existing deep transfer learning strategy. In addition, the training time of MDTL is reduced by 93.94%. MDTL is an easy-to-deploy, scalable, and reliable transfer learning strategy for existing deep learning models, which significantly improves model performance and reduces preparation time without changing model architecture.
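The core idea of chaining weights through multiple source domains before fine-tuning on the target can be shown on a toy model. This is only an illustration of the weight-handoff pattern, assuming a tiny logistic-regression "model" in place of the deep CNNs used by MDTL; all names and hyperparameters here are hypothetical.

```python
import numpy as np

def train_logreg(X, y, w=None, lr=0.1, epochs=200):
    """Binary logistic regression trained by gradient descent.
    Passing `w` continues training from transferred weights."""
    Xb = np.hstack([X, np.ones((len(X), 1))])      # append bias column
    if w is None:
        w = np.zeros(Xb.shape[1])
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-np.clip(Xb @ w, -30, 30)))
        w = w - lr * Xb.T @ (p - y) / len(y)       # gradient step
    return w

def mdtl_toy(source_domains, target, target_epochs=50):
    """Chain training through the source domains in sequence
    (source -> source transfer), then fine-tune on the target domain."""
    w = None
    for X, y in source_domains:
        w = train_logreg(X, y, w=w)
    return train_logreg(*target, w=w, epochs=target_epochs)
```

The handoff means the target fine-tuning starts from weights already shaped by every source domain, which is what allows the short `target_epochs` budget in this sketch.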
- Conference Article
29
- 10.1109/cic.2016.026
- Nov 1, 2016
In recent years, Brain-Computer Interfaces (BCIs) have gained popularity in non-medical domains such as the gaming, entertainment, personal health, and marketing industries. A growing number of companies offer various inexpensive consumer-grade BCIs, and some of these companies have recently introduced the concept of BCI "App stores" in order to facilitate the expansion of BCI applications and provide software development kits (SDKs) for other developers to create new applications for their devices. BCI applications have access to users' unique brainwave signals, which consequently allows them to make inferences about users' thoughts and mental processes. Since there are no specific standards that govern the development of BCI applications, their users are at risk of privacy breaches. In this work, we perform the first comprehensive analysis of BCI App stores, including software development kits (SDKs), application programming interfaces (APIs), and BCI applications, with respect to privacy issues. The goal is to understand how brainwave signals are handled by BCI applications and what threats to user privacy exist. Our findings show that most applications have unrestricted access to users' brainwave signals and can easily extract private information about their users without them even noticing. We discuss potential privacy threats posed by current practices used in BCI App stores and then describe some countermeasures that could be used to mitigate the privacy threats.
- Book Chapter
- 10.1007/978-981-10-4741-1_18
- Nov 17, 2017
Electroencephalogram (EEG) is the most convenient method for recording the electrical activities of the brain for Brain Computer Interface (BCI) applications. This EEG data is notoriously noisy. A variety of frequency estimation techniques are used in feature extraction, which is possible because the information of interest lies in well-defined frequency bands. Applying Empirical Mode Decomposition (EMD) to the recorded EEG waves of subjects renders time-frequency data depicting instantaneous frequencies. EMD is used to obtain the Hilbert–Huang Transform (HHT) of the data, which is chosen over the Fourier Transform (FT) owing to the nonstationarity, closely spaced frequency bands of interest, and low SNR of the recorded data. The HHT of the data can be used to obtain a feature, or signature, which can serve as a command signal for various BCI applications.
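The second stage of the HHT, extracting instantaneous frequency from the analytic signal, can be sketched with a frequency-domain Hilbert construction. EMD itself is omitted here for brevity; in the full HHT this step would be applied to each intrinsic mode function produced by EMD, not to the raw signal as this simplified sketch does.

```python
import numpy as np

def analytic_signal(x):
    """Analytic signal of a real 1-D array via the FFT-based Hilbert
    construction: zero negative frequencies, double positive ones."""
    N = len(x)
    X = np.fft.fft(x)
    h = np.zeros(N)
    h[0] = 1.0
    if N % 2 == 0:
        h[N // 2] = 1.0
        h[1:N // 2] = 2.0
    else:
        h[1:(N + 1) // 2] = 2.0
    return np.fft.ifft(X * h)

def instantaneous_frequency(x, fs):
    """Instantaneous frequency (Hz) from the unwrapped analytic phase."""
    phase = np.unwrap(np.angle(analytic_signal(x)))
    return np.diff(phase) * fs / (2.0 * np.pi)
```

For a pure sinusoid the result is a near-constant trace at the oscillation frequency; for an intrinsic mode function it traces how the dominant frequency drifts over time, which is exactly the nonstationary information the abstract argues the FT cannot provide.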
- Research Article
9
- 10.1088/1741-2552/ad152f
- Jan 17, 2024
- Journal of Neural Engineering
Transferring a deep learning model from healthy subjects to stroke patients in a motor imagery brain–computer interface
- Research Article
1
- 10.1038/s41598-025-07088-1
- Jul 4, 2025
- Scientific Reports
With the increasing integration of artificial intelligence (AI) in several scientific domains, there is a rising demand for advanced AI tools capable of addressing advanced research challenges. A challenge of paramount importance lies in accurately predicting the streamflow within river basins. Effective river flow prediction holds significant relevance, particularly given the substantial societal implications of river usage, encompassing areas such as transportation, agriculture, and power generation. The present study introduces a novel approach to streamflow prediction involving the development of a Deep Learning (DL) model that combines a convolutional neural network with Transfer Learning (TL) techniques to predict streamflow in river systems. With the aim of training the developed DL model, the study employed a time-series dataset containing hydrological data related to two distinct river basins, i.e., Paraíba do Sul, in Brazil, and Zambezi in the state of Mozambique. The developed DL models exhibited the capability to effectively predict the river flow with a one-day horizon, relying on the preceding three or seven days of historical data. To overcome the limited availability of training data and reduce the training time of DL models, TL was leveraged to incorporate two additional distinct time-series datasets, i.e., historical streamflow data from the São Francisco River in Brazil, and climate data from Delhi, India. The application of TL significantly reduced training time, leading only to a minimal decrease in prediction performance. Indeed, in the case of DL models trained on data collected from the Paraíba do Sul River, a substantial reduction in training time was observed, up to 27%, with a modest percentage decrease of 0.31% in test predictive performance. Similarly, TL induced a significant reduction in training time of up to 48%, while resulting in a modest 2% reduction in test predictive performance for the Zambezi dataset.
The findings underscore the significance of TL as a strategic and viable approach to improve the efficiency of river flow prediction models in the context of basins with limited hydrological data available.
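The "preceding three or seven days predict the next day" setup described above is a standard sliding-window construction. A minimal sketch, assuming the streamflow record is a 1-D daily NumPy array; the function name and shapes are illustrative, not from the paper:

```python
import numpy as np

def windowed_dataset(series, lookback):
    """Turn a 1-D daily series into supervised (X, y) pairs: each row of
    X holds `lookback` consecutive days, and y holds the following day."""
    X = np.array([series[i:i + lookback]
                  for i in range(len(series) - lookback)])
    y = series[lookback:]
    return X, y
```

The same construction works for both lookback choices in the study (3 or 7 days) and is also the usual shape fed to convolutional or recurrent forecasting models.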
- Book Chapter
37
- 10.1049/pbce114e_ch5
- Sep 10, 2018
One of the major limitations of the brain-computer interface (BCI) is its long calibration time. Typically, a large amount of training data needs to be collected at the beginning of each session in order to tune the parameters of the system for the target user, due to between-session/subject non-stationarity. To mitigate this limitation, transfer learning is potentially a useful solution. Transfer learning extracts information from different domains (raw data, features, or the classification domain) to compensate for the lack of labelled data from the test subject. In this chapter, transfer learning definitions and techniques are fully explained. After that, some of the available transfer learning applications in BCI are explored, followed by a brief discussion about applying transfer learning in the different domains. The discussion shows that, despite some advances, a successful transfer learning framework for BCI still needs to be developed. Finally, future research directions in this topic are suggested in order to successfully and reliably reduce the calibration time for new subjects and increase the accuracy of the system.
- Research Article
39
- 10.3389/fnhum.2021.643386
- May 28, 2021
- Frontiers in Human Neuroscience
Brain–computer interfaces (BCIs) utilizing machine learning techniques are an emerging technology that enables a communication pathway between a user and an external system, such as a computer. Owing to its practicality, electroencephalography (EEG) is one of the most widely used measurements for BCI. However, EEG has complex patterns and EEG-based BCIs mostly involve a cost/time-consuming calibration phase; thus, acquiring sufficient EEG data is rarely possible. Recently, deep learning (DL) has had a theoretical/practical impact on BCI research because of its use in learning representations of complex patterns inherent in EEG. Moreover, algorithmic advances in DL facilitate short/zero-calibration in BCI, thereby suppressing the data acquisition phase. Those advancements include data augmentation (DA), increasing the number of training samples without acquiring additional data, and transfer learning (TL), taking advantage of representative knowledge obtained from one dataset to address the so-called data insufficiency problem in other datasets. In this study, we review DL-based short/zero-calibration methods for BCI. Further, we elaborate methodological/algorithmic trends, highlight intriguing approaches in the literature, and discuss directions for further research. In particular, we search for generative model-based and geometric manipulation-based DA methods. Additionally, we categorize TL techniques in DL-based BCIs into explicit and implicit methods. Our systematization reveals advances in the DA and TL methods. Among the studies reviewed herein, ~45% of DA studies used generative model-based techniques, whereas ~45% of TL studies used explicit knowledge transferring strategy. Moreover, based on our literature review, we recommend an appropriate DA strategy for DL-based BCIs and discuss trends of TLs used in DL-based BCIs.