Abstract
Research on affective computing with physiological signals has become an important branch of artificial intelligence. However, most studies focus only on improving emotion classification models while neglecting the correlation between subjective emotional labels and implicit EEG feature information. In this study, we propose an interpretable emotion classification framework based on food images as visual stimuli, together with a new EEG dataset acquired with a portable single-electrode EEG device. We then construct a multi-domain feature extraction method based on sliding time windows, which effectively extracts single-channel EEG features. Finally, we choose the extreme gradient boosting (XGBoost) model as the classifier in the proposed framework. The optimal accuracy of our single-channel EEG classification model is 94.76%, matching the performance of multi-channel EEG classification. Building on XGBoost, we interpret the global and local emotion feature selection by computing Shapley additive explanation (SHAP) values. Through this interpretability analysis, we obtain not only the degree to which features contribute to the three emotional labels (positive, negative, and neutral), but also whether each feature has a positive or negative effect on the classification result. The source code of this work is publicly available at: https://github.com/VCMHE/MDF-EEG.
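The sliding-window, multi-domain feature extraction described above can be sketched as follows. This is a minimal illustration using only NumPy, not the paper's implementation: the window length, step size, sampling rate, frequency bands, and feature set here are illustrative assumptions, and the signal is synthetic.

```python
import numpy as np

def sliding_windows(signal, win_len, step):
    """Yield overlapping windows from a 1-D single-channel signal."""
    for start in range(0, len(signal) - win_len + 1, step):
        yield signal[start:start + win_len]

def band_power(window, fs, lo, hi):
    """Mean spectral power in the [lo, hi) Hz band via the FFT."""
    freqs = np.fft.rfftfreq(len(window), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(window)) ** 2
    mask = (freqs >= lo) & (freqs < hi)
    return psd[mask].mean()

def extract_features(window, fs):
    """Time-domain and frequency-domain features for one window.

    The concrete feature set (mean, std, three band powers) is a
    placeholder, not the exact multi-domain feature set of the paper.
    """
    bands = {"theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}
    feats = {"mean": window.mean(), "std": window.std()}
    for name, (lo, hi) in bands.items():
        feats[name] = band_power(window, fs, lo, hi)
    return feats

# Example: 4 s of synthetic single-channel "EEG" at 256 Hz with a
# dominant 10 Hz (alpha-band) oscillation plus noise.
fs = 256
t = np.arange(0, 4, 1 / fs)
eeg = np.sin(2 * np.pi * 10 * t) + 0.1 * np.random.randn(len(t))

# 1 s windows with 50% overlap -> one feature vector per window.
rows = [extract_features(w, fs)
        for w in sliding_windows(eeg, win_len=fs, step=fs // 2)]
```

Each resulting feature vector (one per window) would then be fed to a classifier such as XGBoost, and per-feature SHAP values could be computed on the trained model to attribute each prediction to individual features.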