Abstract

Recent advances in imagined speech recognition from EEG signals have shown its potential to enable a new, natural form of communication, poised to improve the lives of subjects with motor disabilities. However, differences among subjects may hinder the applicability of a previously trained classifier to new users, since a significant amount of labeled samples must be acquired from each new user, making the process tedious and time-consuming. Unsupervised domain adaptation (UDA) methods, especially those based on deep learning (D-UDA), arise as a potential solution to this issue by reducing the differences among the feature distributions of subjects. It has been shown that the divergence in both the marginal and conditional distributions must be reduced to encourage similar feature distributions. However, current D-UDA methods may become unreliable in adaptation scenarios where the feature space is poorly discriminative among classes, degrading the accuracy of the classifier. To address this issue, we introduce a D-UDA method, named Standardization-Refinement Domain Adaptation (SRDA), which combines Adaptive Batch Normalization (AdaBN) with a novel loss function based on the variation of information (VOI), in order to build an adaptive classifier on EEG data corresponding to imagined speech. Applied to two imagined speech datasets, SRDA outperformed standard BCI classifiers and existing D-UDA methods, achieving accuracies of 61.02 ± 8.14% and 62.99 ± 4.78%, assessed using leave-one-out cross-validation.
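
To make the standardization step concrete, the sketch below illustrates the AdaBN idea in PyTorch: the running statistics of all BatchNorm layers, estimated on source subjects during training, are recomputed from unlabeled target-subject EEG, while the learned weights stay fixed. This is a minimal illustration of generic AdaBN, not the authors' implementation; the names `model` and `target_loader` are hypothetical, and the paper's VOI-based refinement loss is not reproduced here.

```python
import torch
import torch.nn as nn

def adapt_batchnorm(model: nn.Module, target_loader) -> nn.Module:
    """Replace BatchNorm running statistics with estimates computed
    on the unlabeled target domain (the core idea of AdaBN)."""
    for m in model.modules():
        if isinstance(m, (nn.BatchNorm1d, nn.BatchNorm2d)):
            m.reset_running_stats()   # discard source-domain statistics
            m.momentum = None         # use a cumulative moving average
    model.train()  # BN layers only update running stats in train mode
    with torch.no_grad():  # no gradients: weights remain unchanged
        for x in target_loader:  # unlabeled target-subject EEG batches
            model(x)
    model.eval()
    return model
```

After this pass, the network normalizes target features with target-domain statistics, which reduces the marginal distribution shift between subjects; the conditional shift is then handled by the refinement loss.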
