Abstract

This article studies Electroencephalogram (EEG) mental recognition. Because the human brain is highly complex and EEG signals are strongly affected by the environment, EEG mental recognition can be treated as a domain adaptation problem. Our main contributions are as follows: (1) Most existing domain adaptation methods learn only a linear subspace of a Reproducing Kernel Hilbert Space (RKHS), while the RKHS itself is not learned. Given the complexity and nonlinearity of EEG mental recognition, we propose an EEG mental recognition algorithm based on two stages of learning: RKHS learning followed by RKHS subspace learning. The source-dictionary-regularized RKHS subspace learning we propose applies to EEG mental recognition and outperforms pure RKHS subspace learning, but is not sufficient on its own; to obtain satisfactory results, we learn the RKHS before learning its subspace. (2) By the Moore–Aronszajn theorem, an RKHS is uniquely generated by a kernel function. Existing RKHSs are rarely learnable, because it is difficult to find a kernel function that can be learned and optimized. For RKHS learning, this paper uses a learnable kernel function that we published previously, and this kernel function is easy to optimize. (3) In RKHS subspace learning, most existing methods adopt the Maximum Mean Discrepancy (MMD) criterion, but MMD cannot make the spatial distributions of same-category source and target domain data overlap as much as possible, and the labels of the target domain data are unknown. To solve this problem, this paper proposes an RKHS subspace learning framework based on source-domain dictionary regularization. Experimental results on the brain-computer interface international competition dataset (BCI Competition IV 2a) show that the proposed algorithm outperforms five other state-of-the-art domain adaptation algorithms.
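The MMD criterion mentioned above compares the source- and target-domain distributions through the distance between their kernel mean embeddings. As a purely illustrative sketch (not the paper's implementation), the following numpy snippet computes the biased empirical estimate of squared MMD with a Gaussian kernel on hypothetical toy data; all variable names and the bandwidth choice are assumptions for the example:

```python
import numpy as np

def gaussian_kernel(X, Y, sigma=1.0):
    # pairwise squared Euclidean distances between rows of X and Y
    d2 = np.sum(X**2, 1)[:, None] + np.sum(Y**2, 1)[None, :] - 2.0 * X @ Y.T
    return np.exp(-d2 / (2.0 * sigma**2))

def mmd2(Xs, Xt, sigma=1.0):
    # biased empirical estimate of squared MMD between two samples
    Kss = gaussian_kernel(Xs, Xs, sigma)
    Ktt = gaussian_kernel(Xt, Xt, sigma)
    Kst = gaussian_kernel(Xs, Xt, sigma)
    return Kss.mean() + Ktt.mean() - 2.0 * Kst.mean()

rng = np.random.default_rng(0)
source = rng.normal(0.0, 1.0, (200, 4))       # toy "source domain"
target_near = rng.normal(0.0, 1.0, (200, 4))  # same distribution
target_far = rng.normal(2.0, 1.0, (200, 4))   # shifted distribution
print(mmd2(source, target_near))  # small: distributions match
print(mmd2(source, target_far))   # larger: distributions differ
```

Note that MMD, as used here, compares whole distributions; it carries no label information, which is exactly why it cannot by itself align same-category subsets of the source and target domains.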

Highlights

  • Transfer learning is a machine learning method that uses existing knowledge to solve different but related domains or tasks [2] [3]

  • In Reproducing Kernel Hilbert Space (RKHS) subspace learning, most existing methods adopt the Maximum Mean Discrepancy (MMD) criterion, but it cannot make the spatial distributions of same-category source and target domain data overlap as much as possible, and the labels of the target domain data are unknown


Summary

Introduction

Transfer learning is a machine learning method that uses existing knowledge to solve different but related domains or tasks [2] [3]. We can treat data samples with the same feature space and marginal distribution as belonging to the same domain, and two tasks are the same when their label spaces and posterior conditional probability distributions are consistent. Domain adaptive learning is a kind of transfer learning that transfers data features; it studies how to use labeled source-domain data and prior knowledge of the target domain to learn reliably and complete tasks in the target domain when the probability distributions of the source and target domains are different but related. Shekhar et al. [24] [25] represent the source-domain and target-domain data through a shared dictionary in a latent subspace. Domain-specific dictionary learning [26] [27] learns a dictionary for each domain, and uses domain-specific or domain-
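The shared-dictionary idea of Shekhar et al. can be illustrated in miniature. The sketch below is an assumption-laden simplification: it uses a plain SVD basis as the shared "dictionary" instead of the sparse dictionary learning of [24] [25], and the two toy domains are synthetic data generated for the example:

```python
import numpy as np

rng = np.random.default_rng(0)
# toy setup: two domains share a 3-dimensional latent structure,
# but the target domain is shifted (a simple domain discrepancy)
latent = rng.normal(size=(100, 3))
basis = rng.normal(size=(3, 16))
source = latent @ basis + 0.05 * rng.normal(size=(100, 16))
target = latent @ basis + 0.5 + 0.05 * rng.normal(size=(100, 16))

# learn a shared basis ("dictionary") from both centered domains jointly
stacked = np.vstack([source - source.mean(0), target - target.mean(0)])
_, _, Vt = np.linalg.svd(stacked, full_matrices=False)
D = Vt[:3]  # rows of D play the role of shared dictionary atoms

# code each domain against the shared dictionary (latent representations)
codes_s = (source - source.mean(0)) @ D.T
codes_t = (target - target.mean(0)) @ D.T

# reconstruction check for the source domain
recon_s = codes_s @ D + source.mean(0)
rel_err = np.linalg.norm(source - recon_s) / np.linalg.norm(source)
print(rel_err)  # small when the shared basis captures the latent structure
```

Because both domains are coded against the same atoms, their latent representations live in one common space, which is the property the shared-dictionary approach exploits for adaptation.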

