Abstract

Traditional individual identification methods, such as face and fingerprint recognition, carry the risk of personal information leakage. The uniqueness and privacy of electroencephalograms (EEG) and the popularization of EEG acquisition devices have intensified research on EEG-based individual identification in recent years. However, most existing work uses EEG signals from a single session or emotion, ignoring large differences between domains. As EEG signals do not satisfy the traditional deep learning assumption that training and test sets are independently and identically distributed, it is difficult for trained models to maintain good classification performance on new sessions or new emotions. In this article, an individual identification method, called Multi-Loss Domain Adaptor (MLDA), is proposed to deal with the differences between the marginal and conditional distributions elicited by different domains. The proposed method consists of four parts: a) a feature extractor, which uses deep neural networks to extract deep features from EEG data; b) a label predictor, which uses fully connected layers to predict subject labels; c) marginal distribution adaptation, which uses maximum mean discrepancy (MMD) to reduce marginal distribution differences; d) associative domain adaptation, which reduces conditional distribution differences. Using the MLDA method, the cross-session and cross-emotion EEG-based individual identification problem is addressed by reducing the influence of time and emotion. Experimental results confirm that MLDA outperforms other state-of-the-art methods.
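
For illustration, the MMD term used in part c) can be sketched as a biased estimate of the squared maximum mean discrepancy between source-domain and target-domain feature batches. This is a minimal NumPy sketch, not the paper's implementation; the RBF kernel choice and the bandwidth parameter `gamma` are assumptions for the example.

```python
import numpy as np

def rbf_kernel(X, Y, gamma=1.0):
    # Pairwise RBF kernel values k(x, y) = exp(-gamma * ||x - y||^2)
    # between rows of X (n, d) and rows of Y (m, d).
    sq_dists = (
        np.sum(X ** 2, axis=1)[:, None]
        + np.sum(Y ** 2, axis=1)[None, :]
        - 2.0 * X @ Y.T
    )
    return np.exp(-gamma * sq_dists)

def mmd2(X, Y, gamma=1.0):
    # Biased estimate of squared MMD between the empirical
    # distributions of X (source features) and Y (target features).
    return (
        rbf_kernel(X, X, gamma).mean()
        + rbf_kernel(Y, Y, gamma).mean()
        - 2.0 * rbf_kernel(X, Y, gamma).mean()
    )
```

In a domain-adaptation setup of this kind, `mmd2` would be evaluated on the deep features of source and target batches and added to the classification loss, so that minimizing it pulls the two marginal feature distributions together.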
