Abstract

Emotion recognition based on EEG (electroencephalogram) signals is a key technology for improving communication between doctors and patients, and it has attracted increasing attention in recent years. Traditional algorithms generally take the raw EEG sequence as input, neglecting both the detrimental influence of noise, which is difficult to remove, and the importance of shallow features for the recognition process. As a result, these algorithms struggle to recognize and analyze emotions reliably and suffer from unstable performance. To address this problem, this paper proposes a new EEG emotion recognition method based on a 1D-DenseNet. First, we extract the band energy and sample entropy of the EEG signal to form a 1D feature vector that replaces the raw sequence signal and reduces noise interference. Second, we construct a 1D-DenseNet model that takes this 1D vector as input and concatenates the shallow hand-crafted features from the input layer with the output of each convolution layer to form the input of the next convolution layer. This design increases the contribution of shallow features and yields strong performance. To verify the effectiveness of the method, we evaluate it on the MAHNOB-HCI and DEAP datasets, where the average emotion recognition accuracy reaches 90.02% and 93.51%, respectively. Compared with existing published results, the proposed method achieves better classification performance. Its simple preprocessing and high recognition accuracy make it well suited to practical medical research.
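As a rough illustration of the two ideas in the abstract, replacing the raw EEG sequence with a compact feature vector and densely reusing those shallow features in every convolution layer, the following Python sketch shows one plausible implementation in PyTorch and SciPy. The band edges, sample-entropy parameters, growth rate, and layer count are illustrative assumptions, not the paper's hyperparameters, and the abstract does not specify the authors' exact architecture.

```python
# Minimal sketch of the pipeline described in the abstract.
# All hyperparameters below (band edges, m, r, growth, depth) are
# illustrative assumptions, not the authors' published settings.
import numpy as np
import torch
import torch.nn as nn
from scipy.signal import welch

BANDS = {"delta": (1, 4), "theta": (4, 8), "alpha": (8, 13),
         "beta": (13, 30), "gamma": (30, 45)}  # assumed band edges (Hz)

def band_energies(x, fs):
    """Per-band energy estimated from the Welch power spectral density."""
    f, pxx = welch(x, fs=fs, nperseg=min(len(x), 256))
    return [pxx[(f >= lo) & (f < hi)].sum() for lo, hi in BANDS.values()]

def sample_entropy(x, m=2, r_factor=0.2):
    """Naive O(N^2) sample entropy, tolerance r = r_factor * std(x)."""
    x = np.asarray(x, dtype=float)
    r = r_factor * x.std()
    def match_count(mm):
        t = np.array([x[i:i + mm] for i in range(len(x) - mm)])
        d = np.max(np.abs(t[:, None] - t[None, :]), axis=2)
        return (d <= r).sum() - len(t)  # drop self-matches on the diagonal
    b, a = match_count(m), match_count(m + 1)
    return -np.log(a / b) if a > 0 and b > 0 else np.inf

class DenseBlock1D(nn.Module):
    """Each conv layer sees the concatenation of the input feature vector
    and all previous conv outputs, so the shallow hand-crafted features
    feed directly into every layer (the dense-connection idea)."""
    def __init__(self, in_ch, growth=16, n_layers=3):
        super().__init__()
        self.layers = nn.ModuleList()
        ch = in_ch
        for _ in range(n_layers):
            self.layers.append(nn.Sequential(
                nn.Conv1d(ch, growth, kernel_size=3, padding=1),
                nn.BatchNorm1d(growth), nn.ReLU()))
            ch += growth
        self.out_ch = ch  # channels after the final concatenation

    def forward(self, x):  # x: (batch, in_ch, length)
        feats = [x]
        for layer in self.layers:
            feats.append(layer(torch.cat(feats, dim=1)))
        return torch.cat(feats, dim=1)

# Example: build the per-channel 1D feature vector the model consumes
# (fs = 128 Hz as in the preprocessed DEAP recordings; the short slice
# for sample entropy is just to keep the naive O(N^2) version fast).
sig = np.random.randn(8064)
features = band_energies(sig, fs=128) + [sample_entropy(sig[:512])]
x = torch.tensor(features, dtype=torch.float32).view(1, 1, -1)
out = DenseBlock1D(in_ch=1)(x)  # shape: (1, 1 + 3*16, 6)
```

A full model would stack such blocks per EEG channel and end with a classifier head; the point of the sketch is only that concatenating the feature vector into every layer's input keeps the shallow features' influence from fading with depth.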
