Abstract
The purpose of this study was to investigate the feasibility of using forehead biosignals as informative channels for the classification of music-induced emotions. Classification of four emotional states in the Arousal-Valence space was performed by employing two parallel support vector machines as arousal and valence classifiers. Relative powers of EEG sub-bands, spectral entropy, mean power frequency, and higher-order crossings were extracted from each of the three forehead data channels: left Temporalis, Frontalis, and right Temporalis. The inputs of the classifiers were obtained by a feature selection algorithm based on a fuzzy-rough model. Averaged subject-independent classification accuracies of 93.80%, 92.43%, and 86.67% were achieved for arousal classification, valence classification, and classification of the four emotional states in the Arousal-Valence space, respectively.
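The two-parallel-classifier scheme described above can be sketched as follows. This is an illustrative example only, not the authors' implementation: the feature data is synthetic, and the labeling rule and kernel choice are assumptions. It shows how a binary arousal classifier and a binary valence classifier can be trained independently and their outputs combined into one of four quadrants of the Arousal-Valence space.

```python
# Illustrative sketch (synthetic data, assumed labels): two parallel SVMs
# whose binary outputs jointly identify a quadrant of Arousal-Valence space.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Stand-in feature matrix: 200 samples x 12 features, a placeholder for
# the sub-band powers, spectral entropy, mean power frequency, and
# higher-order crossings extracted from the three forehead channels.
X = rng.normal(size=(200, 12))
y_arousal = (X[:, 0] > 0).astype(int)  # 1 = high arousal, 0 = low (synthetic rule)
y_valence = (X[:, 1] > 0).astype(int)  # 1 = positive valence, 0 = negative (synthetic rule)

# Two parallel binary SVM classifiers, one per affective dimension.
arousal_clf = SVC(kernel="rbf").fit(X, y_arousal)
valence_clf = SVC(kernel="rbf").fit(X, y_valence)

QUADRANTS = {
    (1, 1): "high-arousal/positive-valence",
    (1, 0): "high-arousal/negative-valence",
    (0, 1): "low-arousal/positive-valence",
    (0, 0): "low-arousal/negative-valence",
}

def classify_emotion(x):
    """Combine the two binary predictions into one of four emotional states."""
    a = int(arousal_clf.predict(x.reshape(1, -1))[0])
    v = int(valence_clf.predict(x.reshape(1, -1))[0])
    return QUADRANTS[(a, v)]

print(classify_emotion(X[0]))
```

In this design each classifier solves a simpler binary problem, and the four-class decision emerges from their combination, which mirrors the paper's reported accuracies: the joint (four-state) accuracy is lower than either binary accuracy alone, since both classifiers must be correct simultaneously.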