Multi-source domain separation adversarial domain adaptation for EEG emotion recognition

Similar Papers
  • Research Article
  • Cited by 22
  • 10.1016/j.neucom.2020.12.046
Robust adversarial discriminative domain adaptation for real-world cross-domain visual recognition
  • Jan 2, 2021
  • Neurocomputing
  • Jianfei Yang + 3 more

  • Research Article
  • Cited by 1
  • 10.1016/j.bspc.2024.106953
MSS-JDA: Multi-Source Self-Selected Joint Domain Adaptation method based on cross-subject EEG emotion recognition
  • Sep 27, 2024
  • Biomedical Signal Processing and Control
  • Shinan Chen + 3 more

  • Research Article
  • Cited by 4
  • 10.3390/brainsci13091326
Cross-Sensory EEG Emotion Recognition with Filter Bank Riemannian Feature and Adversarial Domain Adaptation.
  • Sep 14, 2023
  • Brain Sciences
  • Chenguang Gao + 2 more

Emotion recognition is crucial for understanding human affective states and has a wide range of applications. Electroencephalography (EEG), a non-invasive neuroimaging technique that captures brain activity, has gained attention in emotion recognition. However, existing EEG-based emotion recognition systems are limited to specific sensory modalities, hindering their applicability. Our study offers a comprehensive framework that overcomes these sensory-focused limits and the challenges of cross-sensory settings. We collected cross-sensory emotion EEG data using multimodal emotion stimuli (three sensory modalities: audio, visual, and audio-visual; two emotion states: pleasant and unpleasant). The proposed framework, the filter bank adversarial domain adaptation Riemann method (FBADR), leverages filter bank techniques and Riemannian tangent space methods for feature extraction from cross-sensory EEG data. Compared with plain Riemannian methods, the filter bank and adversarial domain adaptation components improved average accuracy by 13.68% and 8.36%, respectively. Comparative analysis of classification results showed that FBADR achieved state-of-the-art cross-sensory emotion recognition performance, reaching an average accuracy of 89.01% ± 5.06%. Moreover, the method remained robust, maintaining high cross-sensory recognition performance at signal-to-noise ratios (SNR) ≥ 1 dB. Overall, our study contributes to the EEG-based emotion recognition field by providing a comprehensive framework that overcomes the limitations of sensory-oriented approaches and successfully tackles cross-sensory scenarios.
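The Riemannian tangent-space step of a pipeline like FBADR can be sketched in a few lines. This is a minimal numpy illustration under simplifying assumptions: an arithmetic-mean reference point, random stand-in "trials", and our own helper names; the paper's filter bank and adversarial components are omitted.

```python
import numpy as np

def _spd_logm(C):
    # Matrix logarithm of a symmetric positive-definite matrix via eigendecomposition.
    w, V = np.linalg.eigh(C)
    return V @ np.diag(np.log(w)) @ V.T

def _spd_invsqrt(C):
    # Inverse matrix square root of a symmetric positive-definite matrix.
    w, V = np.linalg.eigh(C)
    return V @ np.diag(w ** -0.5) @ V.T

def tangent_space_vector(C, C_ref):
    """Project an SPD covariance matrix C onto the tangent space at C_ref
    and return the vectorized upper triangle as a Euclidean feature vector."""
    P = _spd_invsqrt(C_ref)
    S = _spd_logm(P @ C @ P)          # log-map yields a symmetric matrix
    return S[np.triu_indices(S.shape[0])]

# Toy demo: covariances of random "EEG" trials (4 channels, 128 samples each)
rng = np.random.default_rng(0)
trials = [rng.standard_normal((4, 128)) for _ in range(6)]
covs = [np.cov(x) + 1e-6 * np.eye(4) for x in trials]
C_ref = sum(covs) / len(covs)          # simple arithmetic-mean reference point
feats = np.array([tangent_space_vector(C, C_ref) for C in covs])
print(feats.shape)  # (6, 10): 4*(4+1)/2 upper-triangular entries per trial
```

Mapping the reference matrix onto its own tangent space gives the zero vector, a quick sanity check for this kind of implementation.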

  • Research Article
  • 10.3390/info16070560
CMHFE-DAN: A Transformer-Based Feature Extractor with Domain Adaptation for EEG-Based Emotion Recognition
  • Jun 30, 2025
  • Information
  • Manal Hilali + 2 more

EEG-based emotion recognition (EEG-ER) through deep learning models has gained attention in recent years, with researchers focusing on architecture, feature extraction, and generalisability. This paper presents a novel end-to-end deep learning framework for EEG-ER that combines temporal feature extraction, self-attention mechanisms, and adversarial domain adaptation. The architecture comprises a multi-stage 1D CNN that extracts spatiotemporal features from raw EEG signals, followed by a transformer-based attention module that captures long-range dependencies, and a domain-adversarial neural network (DANN) module with gradient reversal that enables robust subject-independent generalisation by learning domain-invariant features. Experiments on benchmark datasets (DEAP, SEED, DREAMER) demonstrate that our approach achieves state-of-the-art performance, with a significant improvement in cross-subject recognition accuracy over non-adaptive frameworks. The architecture tackles key challenges in EEG emotion recognition, including generalisability, inter-subject variability, and temporal dynamics modelling. The results highlight the effectiveness of combining convolutional feature learning with adversarial domain adaptation for robust EEG-ER.
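The gradient reversal trick at the heart of a DANN module is small enough to sketch directly. A minimal numpy illustration, assuming a single reversal coefficient `LAMBDA` (a hypothetical hyperparameter; the real layer sits inside an autodiff framework):

```python
import numpy as np

LAMBDA = 0.5  # assumed reversal strength

def grl_forward(x):
    # The gradient reversal layer is the identity in the forward pass.
    return x

def grl_backward(grad_output):
    # In the backward pass it multiplies incoming gradients by -LAMBDA,
    # so the feature extractor ascends the domain-discriminator loss
    # while the discriminator itself descends it.
    return -LAMBDA * grad_output

# Tiny illustration: gradient of a scalar "domain loss" w.r.t. a feature vector
feat = np.array([1.0, -2.0, 0.5])
grad_from_discriminator = 2.0 * feat          # e.g. d/dx of ||x||^2
grad_into_extractor = grl_backward(grad_from_discriminator)
print(grad_into_extractor)  # [-1.   2.  -0.5]
```

In a framework such as PyTorch this would be implemented as a custom autograd function; the numpy version only makes the sign flip visible.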

  • Research Article
  • Cited by 42
  • 10.1007/s11063-022-10977-5
A Survey on Adversarial Domain Adaptation
  • Aug 13, 2022
  • Neural Processing Letters
  • Mahta Hassanpour Zonoozi + 1 more

Obtaining large amounts of labeled data is a persistent problem in machine learning. Even when data can be collected at scale, a shift between the source and target data distributions may emerge, causing the model to fail at test time; hence the need for domain adaptation. Domain adaptation techniques fall into three families: discrepancy-based, adversarial-based, and reconstruction-based. Among these, adversarial learning approaches have shown state-of-the-art performance. Although comprehensive surveys of domain adaptation exist, we focus specifically on adversarial-based methods, examining each proposed method in detail with respect to its structure and objective function. Beyond adapting domains, a common goal of these methods is to predict target labels as accurately as possible, for example via metric learning or multi-adversarial discriminators, as used in several of the surveyed papers. We also address the negative transfer issue for dissimilar distributions and propose adding clustering heuristics to the underlying structures as future research.
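Of the three families the survey names, the discrepancy-based one is the easiest to illustrate concretely. A minimal numpy sketch of the (biased) RBF-kernel maximum mean discrepancy estimator, with toy Gaussian samples standing in for source and target features:

```python
import numpy as np

def mmd_rbf(X, Y, gamma=1.0):
    """Squared maximum mean discrepancy between sample sets X and Y
    under an RBF kernel (biased V-statistic estimator)."""
    def k(A, B):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)  # pairwise sq. distances
        return np.exp(-gamma * d2)
    return k(X, X).mean() + k(Y, Y).mean() - 2.0 * k(X, Y).mean()

rng = np.random.default_rng(1)
src = rng.normal(0.0, 1.0, size=(200, 2))        # "source" features
tgt_near = rng.normal(0.1, 1.0, size=(200, 2))   # mild domain shift
tgt_far = rng.normal(3.0, 1.0, size=(200, 2))    # severe domain shift
print(mmd_rbf(src, tgt_near) < mmd_rbf(src, tgt_far))  # True
```

Discrepancy-based adaptation methods minimize a quantity like this between source and target feature distributions; adversarial methods replace the fixed kernel with a learned discriminator.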

  • Research Article
  • 10.3389/fnhum.2024.1464431
Multi-source domain adaptation for EEG emotion recognition based on inter-domain sample hybridization.
  • Oct 31, 2024
  • Frontiers in human neuroscience
  • Xu Wu + 4 more

Electroencephalogram (EEG) is widely used in emotion recognition due to its precision and reliability. However, the nonstationarity of EEG signals causes significant differences between individuals or sessions, making it challenging to construct a robust model. Recently, domain adaptation (DA) methods have shown excellent results in cross-subject EEG emotion recognition by aligning marginal distributions. Nevertheless, these methods do not consider emotion category labels, which can lead to label confusion during alignment. Our study aims to alleviate this problem by promoting conditional distribution alignment during domain adaptation to improve cross-subject and cross-session emotion recognition performance. This study introduces a multi-source domain adaptation common-branch network for EEG emotion recognition and proposes a novel sample hybridization method. This method introduces target-domain information by directionally hybridizing source and target domain samples without increasing the overall sample size, thereby enhancing the effectiveness of conditional distribution alignment. Cross-subject and cross-session experiments were conducted on two publicly available datasets, SEED and SEED-IV, to validate the proposed model. In cross-subject emotion recognition, our method achieved an average accuracy of 90.27% on the SEED dataset, with eight out of 15 subjects attaining a recognition accuracy higher than 90%. On the SEED-IV dataset, the recognition accuracy reached 73.21%. Additionally, in the cross-session experiment, we sequentially used two of the three sessions as source domains and the remaining session as the target domain. The proposed model yielded average accuracies of 94.16% and 75.05% on the two datasets, respectively.
Our proposed method addresses the limited generalization of EEG features across subjects and sessions. By combining multi-source domain adaptation with the sample hybridization method, it effectively transfers the emotion-related knowledge of known subjects and achieves accurate emotion recognition on unlabeled subjects.
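The abstract does not spell out the exact hybridization rule, but the idea of directionally mixing source and target samples without growing the sample count can be sketched, mixup-style, under our own assumptions (the `alpha` parameter and the beta-distributed mixing weights are illustrative, not the paper's):

```python
import numpy as np

def hybridize(source_x, target_x, alpha=0.3, rng=None):
    """Mix each source sample with a randomly chosen target sample,
    keeping the source labels and the overall sample count unchanged.
    A mixup-style stand-in for the paper's hybridization rule."""
    if rng is None:
        rng = np.random.default_rng()
    idx = rng.integers(0, len(target_x), size=len(source_x))
    # Beta(alpha, alpha + 1) biases the mixing weight toward the source side.
    lam = rng.beta(alpha, alpha + 1.0, size=(len(source_x), 1))
    return (1.0 - lam) * source_x + lam * target_x[idx]

rng = np.random.default_rng(2)
src = rng.normal(0, 1, (100, 8))   # labeled source-domain features
tgt = rng.normal(2, 1, (50, 8))    # unlabeled target-domain features
hyb = hybridize(src, tgt, rng=rng)
print(hyb.shape)  # (100, 8): same sample count as the source set
```

The hybridized set sits between the two distributions, which is what lets conditional alignment see target-domain structure without extra samples.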

  • Research Article
  • Cited by 9
  • 10.1038/s41698-024-00652-4
Learning generalizable AI models for multi-center histopathology image classification
  • Jul 19, 2024
  • npj Precision Oncology
  • Maryam Asadi-Aghbolaghi + 14 more

Investigation of histopathology slides by pathologists is an indispensable component of the routine diagnosis of cancer. Artificial intelligence (AI) has the potential to enhance diagnostic accuracy, improve efficiency, and improve patient outcomes in clinical pathology. However, variations in tissue preparation, staining protocols, and histopathology slide digitization can result in over-fitting of deep learning models trained on data from a single center, underscoring the necessity of generalizing deep learning networks for multi-center use. Several techniques, including the use of grayscale images, color normalization, and Adversarial Domain Adaptation (ADA), have been suggested to generalize deep learning algorithms, but there are limits to their effectiveness and discriminability. Convolutional Neural Networks (CNNs) exhibit higher sensitivity to variations in the amplitude spectrum, whereas humans predominantly rely on phase-related components for object recognition. As such, we propose Adversarial fourIer-based Domain Adaptation (AIDA), which applies the advantages of the Fourier transform to adversarial domain adaptation. We conducted a comprehensive examination of subtype classification tasks in four cancers, incorporating cases from multiple medical centers: 1113 ovarian cancer cases, 247 pleural cancer cases, 422 bladder cancer cases, and 482 breast cancer cases. Our proposed approach significantly improved performance, achieving superior classification results in the target domain and surpassing the baseline, color augmentation and normalization techniques, and ADA. Furthermore, extensive pathologist reviews suggested that AIDA successfully identifies known histotype-specific features. This superior performance highlights AIDA's potential in addressing generalization challenges in deep learning models for multi-center histopathology datasets.
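The Fourier intuition behind AIDA (CNNs over-rely on amplitude, humans on phase) can be made concrete with a simple amplitude-swap, a common non-adversarial Fourier-domain adaptation trick; AIDA itself is adversarial and more involved, so treat this purely as a sketch:

```python
import numpy as np

def swap_amplitude(src_img, tgt_img):
    """Combine the target image's Fourier amplitude spectrum with the
    source image's phase. For real inputs the mixed spectrum stays
    Hermitian, so the inverse transform is real up to rounding."""
    F_src = np.fft.fft2(src_img)
    F_tgt = np.fft.fft2(tgt_img)
    mixed = np.abs(F_tgt) * np.exp(1j * np.angle(F_src))
    return np.real(np.fft.ifft2(mixed))

rng = np.random.default_rng(3)
src = rng.random((32, 32))   # stand-in for a source-center patch
tgt = rng.random((32, 32))   # stand-in for a target-center patch
out = swap_amplitude(src, tgt)
# out now carries the target's amplitude spectrum with the source's phase
print(np.allclose(np.abs(np.fft.fft2(out)), np.abs(np.fft.fft2(tgt))))  # True
```

A model trained on such amplitude-perturbed images is pushed toward phase-based, style-invariant features, which is the property AIDA exploits adversarially.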

  • Conference Article
  • Cited by 5
  • 10.1109/icassp43922.2022.9747532
Attention-based Adversarial Partial Domain Adaptation
  • May 23, 2022
  • Mengzhu Wang + 6 more

With the rapid development of vision-based deep learning (DL), generating large-scale synthetic data to supplement real data is an effective way to train DL models for domain adaptation. However, previous vanilla domain adaptation methods generally assume the same label space, and this assumption no longer holds in the more realistic scenario of adapting from a larger, more diverse source domain to a smaller target domain with fewer classes. To handle this problem, we propose attention-based adversarial partial domain adaptation (AADA). Specifically, we leverage adversarial domain adaptation to augment the target domain using the source domain, which readily turns the task into vanilla domain adaptation. Meanwhile, to focus accurately on transferable features, we apply an attention-based method when training the adversarial networks to obtain better transferable semantic features. Experiments on four benchmarks demonstrate that the proposed method outperforms existing methods by a large margin, especially on tough domain adaptation tasks, e.g. VisDA-2017.

  • Research Article
  • Cited by 9
  • 10.1007/s10489-022-04288-4
Semi-supervised adversarial discriminative domain adaptation.
  • Nov 29, 2022
  • Applied Intelligence
  • Thai-Vu Nguyen + 3 more

Domain adaptation is a promising way to train a powerful deep neural network across various datasets. More precisely, domain adaptation methods train the model on one dataset and test it on a completely separate one. The adversarial-based adaptation approach has become popular among domain adaptation methods. Relying on the idea of GANs, it tries to minimize the distribution discrepancy between the training and testing datasets through an adversarial learning process. We observe that a semi-supervised learning approach can be combined with the adversarial-based method to solve the domain adaptation problem. In this paper, we propose an improved adversarial domain adaptation method called Semi-Supervised Adversarial Discriminative Domain Adaptation (SADDA), which outperforms prior domain adaptation methods. We also show that SADDA has a wide range of applications and illustrate the promise of our method for image classification and sentiment classification problems.

  • Conference Article
  • Cited by 21
  • 10.1145/3474085.3475481
InterBN: Channel Fusion for Adversarial Unsupervised Domain Adaptation
  • Oct 17, 2021
  • Mengzhu Wang + 8 more

A classifier trained on one dataset rarely works on other datasets obtained under different conditions because of domain shift. Such a problem is usually solved by domain adaptation methods. In this paper, we propose a novel unsupervised domain adaptation (UDA) method based on Interchangeable Batch Normalization (InterBN) that fuses different channels in deep neural networks for adversarial domain adaptation. Specifically, we first observe that channels with a small batch normalization scaling factor have less influence on the whole domain adaptation, followed by a theoretical proof that the scaling factors of some channels will come close to zero when a sparsity regularization is imposed. Then, we replace the channels that have smaller scaling factors in the source domain with the mean of the channels that have larger scaling factors in the target domain, or vice versa. Such a simple but effective channel fusion scheme can drastically increase domain adaptation ability. Extensive experimental results show that InterBN significantly outperforms current adversarial domain adaptation methods by a large margin on four visual benchmarks. In particular, InterBN achieves a remarkable improvement of 7.7% over conditional adversarial adaptation networks (CDAN) on the VisDA-2017 benchmark.
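The fusion rule described above can be sketched in numpy. This is a simplified, one-directional version under our own assumptions: the scaling factors are given directly rather than learned under sparsity regularization, and `k` is a hypothetical cut-off.

```python
import numpy as np

def fuse_channels(src_feat, tgt_feat, src_gamma, tgt_gamma, k=2):
    """Replace the k source channels with the smallest BN scaling factors
    by the mean of the k target channels with the largest ones
    (a simplified, one-directional take on the InterBN fusion scheme)."""
    weak_src = np.argsort(src_gamma)[:k]      # least informative source channels
    strong_tgt = np.argsort(tgt_gamma)[-k:]   # most informative target channels
    fused = src_feat.copy()
    fused[:, weak_src, ...] = tgt_feat[:, strong_tgt, ...].mean(axis=1, keepdims=True)
    return fused

rng = np.random.default_rng(4)
src = rng.normal(0, 1, (8, 6, 5, 5))   # (batch, channels, H, W) source features
tgt = rng.normal(0, 1, (8, 6, 5, 5))   # matching target-domain features
src_gamma = np.array([0.01, 0.9, 0.02, 0.8, 0.7, 0.6])   # assumed BN scales
tgt_gamma = np.array([0.5, 0.05, 0.9, 0.03, 0.95, 0.4])
fused = fuse_channels(src, tgt, src_gamma, tgt_gamma, k=2)
print(fused.shape)  # (8, 6, 5, 5)
```

Channels with large source-side scales pass through untouched; only the near-zero ones are overwritten, which is what keeps the scheme cheap.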

  • Research Article
  • Cited by 4
  • 10.1016/j.biosystemseng.2022.10.016
Deep adversarial domain adaptation for hyperspectral calibration model transfer among plant phenotyping systems
  • Nov 3, 2022
  • Biosystems Engineering
  • Tanzeel U Rehman + 1 more

  • Conference Article
  • Cited by 7
  • 10.1109/icpr56361.2022.9956207
Cross-session Specific Emitter Identification using Adversarial Domain Adaptation with Wasserstein distance
  • Aug 21, 2022
  • Yalan Ye + 4 more

Accurate and robust specific emitter identification (SEI) is very challenging because the distribution of signals shifts in cross-session scenarios. General domain adaptation (DA) alleviates such shifts by aligning different signal distributions. However, existing general-DA-based SEI methods, which address shifts within the same session, cannot be directly applied to cross-session SEI, since signal distributions vary more drastically across sessions due to continuously changing hardware imperfections. In this paper, we propose a novel method named adversarial domain adaptation with Wasserstein distance (ADAW) to tackle cross-session SEI. Specifically, to alleviate the more severe distribution shift across sessions, a generative model is applied to map the data of a previous session to a later session, regardless of the degree of variation in radio frequency fingerprints (RFFs). Then, a Wasserstein-distance-guided adversarial unsupervised domain adaptation (UDA) strategy is introduced to learn common feature representations for signals of different sessions, so that a model trained on signals from a previous session can precisely identify signals from a later one. Experiments on ADS-B signals from the same emitters in three distinct time sessions validate the capability of ADAW for SEI under cross-session and noisy conditions.
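ADAW approximates the Wasserstein distance with a trained critic network, but in one dimension the empirical 1-Wasserstein distance has a closed form that makes the quantity itself easy to see. A numpy sketch with two synthetic "sessions" (the Gaussian shift is our own stand-in for session-to-session RFF drift):

```python
import numpy as np

def wasserstein_1d(a, b):
    """Empirical 1-Wasserstein distance between two equal-sized 1D samples:
    the mean absolute difference of their sorted values (quantile coupling)."""
    return np.abs(np.sort(a) - np.sort(b)).mean()

rng = np.random.default_rng(5)
session1 = rng.normal(0.0, 1.0, 1000)   # earlier-session signal feature
session2 = rng.normal(0.5, 1.0, 1000)   # later session, drifted hardware
d = wasserstein_1d(session1, session2)
print(round(d, 2))  # close to the 0.5 mean shift
```

In the adversarial setting, a critic constrained to be 1-Lipschitz estimates this same distance for high-dimensional features, and the feature extractor is trained to shrink it.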

  • Conference Article
  • Cited by 21
  • 10.1109/mlsp.2017.8168121
Adversarial domain separation and adaptation
  • Sep 1, 2017
  • Jen-Chieh Tsai + 1 more

Traditional domain adaptation methods attempt to learn a shared representation for matching the distributions of the source and target domains, without characterizing the individual information in either domain. Such a solution suffers from the mixing of individual information into the shared features, which considerably constrains domain adaptation performance. To relax this constraint, it is crucial to extract both shared and individual information. This study captures both via a new domain separation network in which the shared features are extracted and purified through separate modeling of the individual information in both domains. In particular, hybrid adversarial learning is incorporated into a separation network as well as an adaptation network, where the associated discriminators are jointly trained for domain separation and adaptation according to min-max optimization over the separation loss and the domain discrepancy, respectively. Experiments on different tasks show the merit of the proposed adversarial domain separation and adaptation.
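Separation networks of this kind commonly keep shared and private (individual) features apart with a soft orthogonality penalty; the exact separation loss used here is not quoted in the abstract, so the following numpy sketch is illustrative only:

```python
import numpy as np

def difference_loss(shared, private):
    """Soft orthogonality penalty between shared and private feature
    matrices (rows = samples): the squared Frobenius norm of their
    cross-correlation. Zero when the two subspaces do not overlap."""
    return np.linalg.norm(shared.T @ private, ord='fro') ** 2

# Orthogonal shared/private features incur no penalty; identical ones do.
s = np.array([[1.0], [0.0]])        # toy shared feature
p_orth = np.array([[0.0], [1.0]])   # private feature, orthogonal to s
print(difference_loss(s, p_orth), difference_loss(s, s))  # 0.0 1.0
```

Minimizing such a term alongside the adversarial objectives is what "purifies" the shared representation: whatever the private encoders capture is pushed out of the shared one.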

  • Conference Article
  • Cited by 14
  • 10.1109/bibm52615.2021.9669542
Cross-Subject EEG Emotion Recognition Using Domain Adaptive Few-Shot Learning Networks
  • Dec 9, 2021
  • Run Ning + 2 more

Due to individual differences and the nonstationarity of EEG signals, it is difficult to classify EEG emotions with traditional machine learning methods, which assume that the training and testing sets come from the same data distribution; this assumption usually does not hold in the EEG field, so emotion recognition accuracy is poor. In this paper, a Single-Source Domain Adaptive Few-Shot Learning Network (SDA-FSL) is proposed for cross-subject EEG emotion recognition. This is the first time a domain adaptation method with few-shot learning has been used in the field of EEG emotion recognition. A CBAM-based feature mapping module is designed to extract features common to the two domains, and a domain adaptation module aligns the data distributions of the two domains. In addition, a Prototypical Network with an instance-attention mechanism is introduced to preserve domain-specific information. The proposed method was evaluated on the DEAP and SEED datasets in within-dataset and cross-dataset experiments under various N-way k-shot settings. Experimental results show that SDA-FSL outperforms other comparison methods and generalizes well in cross-dataset experiments.
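The Prototypical Networks component classifies queries by distance to class-mean prototypes, which is easy to sketch. A toy 2-way, 3-shot episode in numpy (the 2-D "embeddings" are stand-ins; the paper's CBAM mapping and instance-attention mechanism are omitted):

```python
import numpy as np

def prototypes(support_x, support_y):
    """Class prototypes: the mean embedding of each class's support samples."""
    classes = np.unique(support_y)
    return classes, np.stack([support_x[support_y == c].mean(axis=0)
                              for c in classes])

def classify(query_x, classes, protos):
    """Assign each query to its nearest prototype (squared Euclidean distance)."""
    d = ((query_x[:, None, :] - protos[None, :, :]) ** 2).sum(-1)
    return classes[d.argmin(axis=1)]

# 2-way 3-shot episode with two well-separated emotion "classes"
sx = np.array([[0.0, 0.1], [0.1, 0.0], [0.0, 0.0],
               [1.0, 1.1], [1.1, 1.0], [1.0, 1.0]])
sy = np.array([0, 0, 0, 1, 1, 1])
classes, protos = prototypes(sx, sy)
pred = classify(np.array([[0.05, 0.05], [0.9, 1.0]]), classes, protos)
print(pred)  # [0 1]
```

In SDA-FSL the embeddings would come from the learned, domain-aligned feature extractor; the nearest-prototype rule itself stays this simple.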

  • Book Chapter
  • Cited by 11
  • 10.1007/978-3-030-32226-7_63
Cone-Beam Computed Tomography (CBCT) Segmentation by Adversarial Learning Domain Adaptation
  • Jan 1, 2019
  • Xiaoqian Jia + 9 more

Cone-beam computed tomography (CBCT) is increasingly used in radiotherapy for patient alignment and adaptive therapy where organ segmentation and target delineation are often required. However, due to the poor image quality, low soft tissue contrast, as well as the difficulty in acquiring segmentation labels on CBCT images, developing effective segmentation methods on CBCT has been a challenge. In this paper, we propose a deep model for segmenting organs in CBCT images without requiring labelled training CBCT images. By taking advantage of the available segmented computed tomography (CT) images, our adversarial learning domain adaptation method aims to synthesize CBCT images from CT images. Then the segmentation labels of the CT images can help train a deep segmentation network for CBCT images, using both CTs with labels and CBCTs without labels. Our adversarial learning domain adaptation is integrated with the CBCT segmentation network training with the designed loss functions. The synthesized CBCT images by pixel-level domain adaptation best capture the critical image features that help achieve accurate CBCT segmentation. Our experiments on the bladder images from Radiation Oncology clinics have shown that our CBCT segmentation with adversarial learning domain adaptation significantly improves segmentation accuracy compared to the existing methods without doing domain adaptation from CT to CBCT.
