Abstract

Existing domain adaptation methods for cross-subject emotion recognition focus primarily on accuracy and suffer from intensive hyperparameter tuning and high computational complexity. In this paper, we make a first attempt to address these issues by developing a domain-invariant classifier called Easy Domain Adaptation (EasyDA), built on multi-view emotion inputs (multiple modalities or multiple types of features). First, EasyDA uses both the source domain (training subjects) and the target domain (test subject) to generate domain-generalized features for each view via a fast, accurate, and low-memory approximate empirical kernel map (AEKM), followed by a parameterless weighted combination across views. Second, EasyDA simultaneously learns an optimal separating hyperplane and pseudo labels for the target domain such that (a) high classification accuracy is obtained on both the labeled source-domain data and the pseudo-labeled target-domain data; (b) the distribution distance between the source and target domains is reduced; and (c) the predicted output vector in the target domain changes little over short time intervals, reflecting the biological evidence that emotion varies continuously and smoothly. Finally, by unifying these two steps through ridge regression theory and alternating optimization, EasyDA transfers knowledge across domains accurately, efficiently, and easily in a single framework. Experimental results on the SEED and SEED-IV datasets demonstrate that EasyDA significantly outperforms multiple representative domain adaptation methods in accuracy, computation time, and memory consumption. Notably, EasyDA achieves satisfactory performance under a wide range of parameter settings.
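The second step described above, jointly learning a classifier and target pseudo labels, can be sketched as an alternating scheme. The code below is a minimal, hedged illustration only: the function name `easyda_sketch`, the plain closed-form ridge regression, and the simple pseudo-label update are our assumptions for exposition and omit the paper's multi-view AEKM features, distribution-distance term, and temporal-smoothness term.

```python
# Illustrative sketch (NOT the paper's exact EasyDA formulation):
# alternate between (1) solving a closed-form ridge regression on the
# labeled source data plus pseudo-labeled target data and (2) refreshing
# the target pseudo labels from the current predictions.
import numpy as np

def easyda_sketch(Xs, ys, Xt, n_classes, lam=1.0, n_iter=5):
    """Xs: source features, ys: integer source labels, Xt: target features."""
    Ys = np.eye(n_classes)[ys]              # one-hot source labels
    yt = np.zeros(len(Xt), dtype=int)       # crude initial pseudo labels
    for _ in range(n_iter):
        Yt = np.eye(n_classes)[yt]          # one-hot pseudo labels
        X = np.vstack([Xs, Xt])
        Y = np.vstack([Ys, Yt])
        d = X.shape[1]
        # closed-form ridge regression: W = (X^T X + lam*I)^{-1} X^T Y
        W = np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ Y)
        yt = (Xt @ W).argmax(axis=1)        # update target pseudo labels
    return W, yt
```

Each iteration has a closed-form solve, which is consistent with the abstract's emphasis on low computational cost compared with iterative gradient-based adaptation methods.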
