Abstract

In recent years, research in the field of human-computer interaction (HCI) has focused on strengthening machines' ability to recognize and understand human emotions. Emotion recognition can be performed in several ways, for example through speech, facial expressions, or a combination of the two. However, differences in voices and facial expressions across races and nationalities make emotion reading with these methods less accurate. Another approach is to analyse data from an electroencephalograph (EEG). EEG signals from the human brain reflect the various activities being carried out, one of which is emotion. The EEG signals used in this study come from the DEAP dataset, which consists of 32 files, each containing 40 EEG recordings. The emotions in this dataset are classified along the arousal and valence dimensions. The signals were decomposed into three frequency bands (alpha, beta, and gamma) by band-pass filtering, after which principal component analysis (PCA) and resampling were applied. Classification was then performed with several machine-learning methods. K-star gave the best performance and naïve Bayes the worst: K-star reached accuracies of 81.2% for arousal and 82.6% for valence, whereas naïve Bayes reached 51.2% for arousal and 52.5% for valence.
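
The pipeline described above (band-pass filtering into alpha, beta, and gamma bands, PCA, then classification) can be sketched roughly as follows. This is a minimal illustration only: it assumes a DEAP-style array of shape (trials, channels, samples) at 128 Hz, uses SciPy and scikit-learn as the tooling, and substitutes Gaussian naïve Bayes for one of the compared classifiers, since K-star is a Weka algorithm with no scikit-learn counterpart. The band limits, variable names, and parameter choices are assumptions, not the authors' exact setup.

    # Rough sketch of the band-pass -> PCA -> classification pipeline (assumed details).
    import numpy as np
    from scipy.signal import butter, filtfilt
    from sklearn.decomposition import PCA
    from sklearn.naive_bayes import GaussianNB
    from sklearn.model_selection import cross_val_score

    FS = 128  # DEAP signals are downsampled to 128 Hz
    BANDS = {"alpha": (8, 13), "beta": (13, 30), "gamma": (30, 45)}  # assumed band limits (Hz)

    def bandpass(x, low, high, fs=FS, order=4):
        """Zero-phase Butterworth band-pass filter along the last (time) axis."""
        b, a = butter(order, [low / (fs / 2), high / (fs / 2)], btype="band")
        return filtfilt(b, a, x, axis=-1)

    def band_power_features(eeg):
        """Mean power per channel in each band -> feature matrix of shape (trials, 3 * channels)."""
        feats = []
        for low, high in BANDS.values():
            filtered = bandpass(eeg, low, high)            # (trials, channels, samples)
            feats.append(np.mean(filtered ** 2, axis=-1))  # (trials, channels)
        return np.concatenate(feats, axis=1)

    # Placeholders standing in for one DEAP subject file: 40 trials, 32 EEG channels,
    # 63 s of data, and binary high/low labels for arousal (or valence).
    eeg = np.random.randn(40, 32, 8064)
    labels = np.random.randint(0, 2, size=40)

    X = band_power_features(eeg)
    X = PCA(n_components=10).fit_transform(X)  # dimensionality reduction, as in the abstract

    # Gaussian naïve Bayes as one of the compared classifiers; K-star would be run in Weka.
    scores = cross_val_score(GaussianNB(), X, labels, cv=5)
    print(f"Naive Bayes cross-validated accuracy: {scores.mean():.3f}")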
