Affective computing from physiological signals has become an important branch of artificial intelligence research. However, most work focuses solely on improving emotion classification models while neglecting the correlation between subjective emotion labels and the implicit feature information in EEG signals. In this study, we propose an interpretable emotion classification framework that uses food images as visual stimuli, together with a new EEG dataset acquired with a portable single-electrode EEG device. We then construct a multi-domain feature extraction method based on sliding time windows, which effectively extracts single-channel EEG features. Finally, we adopt the extreme gradient boosting (XGBoost) model as the classifier in the proposed framework. The optimal accuracy of our single-channel EEG classification model is 94.76%, matching the performance of multi-channel EEG classification. Building on XGBoost, we interpret both global and local emotion feature selection by computing Shapley additive explanation (SHAP) values. This interpretability analysis reveals not only the contribution of each feature to the three emotion labels (positive, negative, and neutral), but also whether a feature affects the classification result positively or negatively. The source code of this work is publicly available at: https://github.com/VCMHE/MDF-EEG.
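The sliding-time-window idea behind the multi-domain feature extraction can be sketched as follows. This is a minimal illustration, not the paper's implementation: the paper's method spans multiple feature domains, while this hypothetical sketch shows only a few time-domain statistics (mean, standard deviation, Hjorth activity, mean absolute first difference) computed over overlapping windows of a single-channel signal; the window length, step, and function names are assumptions for illustration.

```python
import math
from statistics import mean, pstdev

def sliding_window_features(signal, win_len, step):
    """Compute simple time-domain features for each sliding window.

    Hypothetical sketch: the multi-domain method in the paper also
    includes other feature domains; only time-domain statistics are
    shown here for illustration.
    """
    features = []
    for start in range(0, len(signal) - win_len + 1, step):
        w = signal[start:start + win_len]
        sd = pstdev(w)
        # Mean absolute first difference as a crude signal-variability measure
        diffs = [abs(w[i + 1] - w[i]) for i in range(len(w) - 1)]
        features.append({
            "mean": mean(w),
            "std": sd,
            "activity": sd ** 2,          # Hjorth activity (variance)
            "mean_abs_diff": mean(diffs),
        })
    return features

# Toy single-channel signal: a 10-cycle sine sampled at 128 points/second
sig = [math.sin(2 * math.pi * 10 * t / 128) for t in range(256)]
# 1-second windows (128 samples) with 50% overlap (step of 64) -> 3 windows
feats = sliding_window_features(sig, win_len=128, step=64)
```

Each window then yields one feature vector, and the per-window vectors form the rows of the matrix fed to the downstream classifier (XGBoost in the proposed framework).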