Abstract

At present, most methods for improving the accuracy of electroencephalogram (EEG)-based emotion recognition do so by increasing the number of channels and feature types. This amounts to training the classification model on big data, but it also increases code complexity and consumes a large amount of computation time. We propose a method combining Ant Colony Optimization with a Convolutional Neural Network and Long Short-Term Memory network (ACO–CNN–LSTM), which dynamically finds the optimal channels and thus operates on lightweight data. First, the time-domain EEG signals are transformed to the frequency domain by the Fast Fourier Transform (FFT), and the Differential Entropy (DE) of three frequency bands (α, β, and γ) is extracted as the feature data. Then, based on the DE feature dataset, ACO plans a path through the electrode locations on the brain map; the classification accuracy of the CNN-LSTM is used as the objective function for path selection, and the electrodes on the optimal path are taken as the optimal channels. Next, the initial learning rate and batch size are tuned to match the characteristics of the data, yielding the best values of both hyperparameters. Finally, the SJTU Emotion EEG Dataset (SEED) is used for emotion recognition with the ACO–CNN–LSTM. The experimental results show that the proposed ACO–CNN–LSTM achieves an average three-class (positive, neutral, negative) accuracy of 96.59% on the lightweight data while reducing the computation time: computational efficiency improves by 15.85% compared with the traditional CNN-LSTM, and accuracy remains above 90% when the data volume is reduced to 50%. In summary, the proposed ACO–CNN–LSTM method achieves both higher efficiency and higher accuracy.
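To make the feature-extraction step concrete, below is a minimal sketch of computing DE features from an EEG segment. It assumes the Gaussian closed form commonly used for DE (DE = ½ ln(2πeσ²) per band, with the band variance read off the FFT spectrum) and SEED's 200 Hz downsampled rate; the band edges, segment length, and array shapes are illustrative, not taken from the paper.

```python
import numpy as np

# Illustrative band edges in Hz; the paper extracts DE over the
# alpha, beta, and gamma bands.
BANDS = {"alpha": (8, 13), "beta": (13, 30), "gamma": (30, 50)}

def band_de(signal, fs, low, high):
    """DE of one channel in one band. The band variance is obtained
    from the FFT spectrum via Parseval's theorem, and DE uses the
    usual Gaussian closed form: DE = 0.5 * ln(2*pi*e*sigma^2)."""
    spectrum = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(signal.size, d=1.0 / fs)
    in_band = (freqs >= low) & (freqs < high)
    # Factor 2 accounts for the negative frequencies dropped by rfft.
    variance = 2.0 * np.sum(np.abs(spectrum[in_band]) ** 2) / signal.size ** 2
    return 0.5 * np.log(2 * np.pi * np.e * variance)

# Example: 62 channels of a 1 s segment at 200 Hz (SEED's sampling
# rate after downsampling) -> a 62 x 3 DE feature matrix.
fs, eeg = 200, np.random.randn(62, 200)  # placeholder data
features = np.array([[band_de(ch, fs, lo, hi)
                      for lo, hi in BANDS.values()] for ch in eeg])
print(features.shape)  # (62, 3)
```

The channel-selection stage can be sketched in the same spirit. The following is a schematic ACO loop, not the authors' exact formulation: ants sample candidate electrode paths, a caller-supplied fitness function (in the paper, the CNN-LSTM classification accuracy on the selected channels) scores each path, and pheromone evaporation plus elitist reinforcement bias later ants toward good channels. All hyperparameters (n_ants, n_iter, rho, q) are placeholders, and the fitness is assumed positive.

```python
import numpy as np

rng = np.random.default_rng(0)

def aco_select_channels(n_channels, path_len, fitness, n_ants=20,
                        n_iter=30, rho=0.1, q=1.0):
    """Schematic ACO channel selection: ants build electrode paths by
    sampling channels in proportion to pheromone, the best path found
    so far is reinforced, and pheromone evaporates each iteration."""
    pheromone = np.ones(n_channels)
    best_path, best_fit = None, -np.inf
    for _ in range(n_iter):
        for _ in range(n_ants):
            p = pheromone / pheromone.sum()
            path = rng.choice(n_channels, size=path_len,
                              replace=False, p=p)
            fit = fitness(path)  # e.g. CNN-LSTM accuracy on these channels
            if fit > best_fit:
                best_path, best_fit = path, fit
        pheromone *= (1 - rho)                # evaporation
        pheromone[best_path] += q * best_fit  # elitist reinforcement
    return best_path, best_fit

# Toy usage with a stand-in fitness that prefers low channel indices;
# in practice this would train and evaluate the CNN-LSTM.
path, fit = aco_select_channels(62, 12, lambda p: 1.0 / (1.0 + p.mean()))
```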
