Abstract

Objective: Designing a portable affective brain-computer interface (aBCI) from EEG signals is challenging because of the large number of recording channels, not all of which are essential for emotion recognition. We aimed to simplify the design by building a two-channel portable aBCI using advanced time-frequency analysis and deep learning.

Method: We used the synchrosqueezing wavelet transform (SSWT), a time-frequency analysis that resolves the frequency fluctuations of EEG signals more sharply than the conventional wavelet transform. A ResNet-18 convolutional neural network was fine-tuned to classify sadness versus happiness. The two best channels were identified across four databases (SEED-IV, SEED-V, SEED-GER, and SEED-FRA) using the leave-one-subject-out (LOSO) method.

Results: The SSWT-ResNet18 model achieved average accuracies over the sad and happy emotions of 76.66%, 78.12%, 81.25%, and 75.00% on the SEED-IV, SEED-V, SEED-GER, and SEED-FRA databases, respectively.

Conclusion: Overall, our study demonstrates the potential of developing a fast aBCI from a minimal number of channels by combining a precise time-frequency method with a deep learning technique.

Significance: Our approach has promising implications for future real-world emotion recognition applications.
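The leave-one-subject-out evaluation mentioned above can be sketched as follows. This is a minimal, hypothetical Python illustration of the LOSO protocol in general, not the paper's implementation; the function name `loso_splits` and the data layout are assumptions for the example.

```python
# Hypothetical sketch of Leave-One-Subject-Out (LOSO) splitting:
# each fold holds out every trial from one subject for testing and
# trains on all remaining subjects' trials.

def loso_splits(subject_ids):
    """Yield (held_out_subject, train_indices, test_indices) per fold."""
    for held_out in sorted(set(subject_ids)):
        train_idx = [i for i, s in enumerate(subject_ids) if s != held_out]
        test_idx = [i for i, s in enumerate(subject_ids) if s == held_out]
        yield held_out, train_idx, test_idx

# Example: 3 subjects with 2 trials each -> 3 folds,
# each fold testing on exactly one subject's trials.
subjects = ["s1", "s1", "s2", "s2", "s3", "s3"]
for held_out, train_idx, test_idx in loso_splits(subjects):
    print(held_out, train_idx, test_idx)
```

Because no trial from the held-out subject appears in training, the reported accuracies estimate how the model generalizes to an entirely unseen person, which is the relevant setting for a portable aBCI.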
