Abstract

Biometric signal-based human-computer interfaces (HCIs) have attracted increasing attention due to their wide applications in healthcare, entertainment, neurocomputing, and related fields. In recent years, deep learning-based approaches have made great progress in biometric signal processing. However, state-of-the-art (SOTA) approaches still suffer from model degradation across subjects or sessions. In this work, we propose a novel unsupervised domain adaptation approach for biometric signal-based HCI via causal representation learning. Specifically, one of three kinds of interventions on biometric signals (i.e., subject, session, or trial) can be selected, and deep models are generalized across the selected intervention. In the proposed approach, a generative model is trained to produce intervened features, which are subsequently used to learn transferable, causal relations under the three modes. Experiments on an EEG-based emotion recognition task and sEMG-based gesture recognition tasks are conducted to confirm the superiority of our approach. Our approach achieves an improvement of +0.21% on inter-subject EEG-based emotion recognition. Moreover, on inter-session sEMG-based gesture recognition, it achieves improvements of +1.47%, +3.36%, +1.71%, and +1.01% on the CSL-HDEMG, CapgMyo DB-b, 3DC, and Ninapro DB6 datasets, respectively. The proposed approach also works on inter-trial sEMG-based gesture recognition, achieving an average improvement of +0.66% on the Ninapro databases. These experimental results demonstrate the superiority of the proposed approach over SOTA unsupervised domain adaptation methods for biometric signal-based HCIs.
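The abstract only sketches the method, but its core idea, generating intervened versions of features with a generative model and training the task model to stay stable across them, lends itself to a compact illustration. Below is a minimal PyTorch sketch under stated assumptions: the FeatureEncoder, InterventionGenerator, the symmetric-KL invariance_loss, and all hyperparameters are hypothetical stand-ins for illustration only, not the paper's actual architecture or objective.

    # Hypothetical sketch of the pipeline described in the abstract: a generative
    # model produces "intervened" features conditioned on a selected intervention
    # variable (subject, session, or trial), and the task head is trained so its
    # predictions remain consistent under those interventions. All names, losses,
    # and sizes below are illustrative assumptions, not the paper's implementation.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    INTERVENTIONS = ("subject", "session", "trial")  # the three selectable modes


    class FeatureEncoder(nn.Module):
        """Maps a raw biometric signal window to a feature vector."""
        def __init__(self, in_channels: int, feat_dim: int = 128):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv1d(in_channels, 64, kernel_size=7, padding=3),
                nn.ReLU(),
                nn.AdaptiveAvgPool1d(1),
                nn.Flatten(),
                nn.Linear(64, feat_dim),
            )

        def forward(self, x):  # x: (batch, channels, time)
            return self.net(x)


    class InterventionGenerator(nn.Module):
        """Generative model producing an intervened version of a feature,
        conditioned on an embedding of the selected intervention id
        (e.g., a target subject, session, or trial)."""
        def __init__(self, feat_dim: int = 128, num_conditions: int = 10):
            super().__init__()
            self.cond_embed = nn.Embedding(num_conditions, feat_dim)
            self.net = nn.Sequential(
                nn.Linear(2 * feat_dim, feat_dim),
                nn.ReLU(),
                nn.Linear(feat_dim, feat_dim),
            )

        def forward(self, z, cond_ids):
            c = self.cond_embed(cond_ids)
            return self.net(torch.cat([z, c], dim=-1))


    def invariance_loss(logits_orig, logits_intervened):
        """Encourage predictions to be stable under interventions
        (a simple symmetric-KL consistency term, assumed here)."""
        p = F.log_softmax(logits_orig, dim=-1)
        q = F.log_softmax(logits_intervened, dim=-1)
        return 0.5 * (F.kl_div(q, p, reduction="batchmean", log_target=True)
                      + F.kl_div(p, q, reduction="batchmean", log_target=True))


    if __name__ == "__main__":
        mode = "session"                    # one of INTERVENTIONS
        assert mode in INTERVENTIONS
        encoder = FeatureEncoder(in_channels=8)
        generator = InterventionGenerator()
        classifier = nn.Linear(128, 6)      # e.g., 6 gesture classes

        x = torch.randn(16, 8, 256)         # labeled source-domain batch
        y = torch.randint(0, 6, (16,))
        cond = torch.randint(0, 10, (16,))  # ids of the chosen intervention

        z = encoder(x)
        z_int = generator(z, cond)          # intervened features
        logits, logits_int = classifier(z), classifier(z_int)

        loss = F.cross_entropy(logits, y) + invariance_loss(logits, logits_int)
        loss.backward()
        print(f"total loss: {loss.item():.4f}")

In this sketch the consistency term stands in for the transferable-relation objective; any divergence between the original and intervened predictions could be substituted, and the generator would in practice be trained jointly (e.g., with a reconstruction or adversarial objective) rather than used off the shelf as shown here.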
