Abstract

In this paper, a new information fusion approach based on 3-channel forehead biosignals (from the left temporalis, frontalis, and right temporalis muscles) and the electrocardiogram is adopted to classify music-induced emotions in the arousal-valence space. The fusion strategy combines feature-level fusion with naive-Bayes decision-level fusion. Optimal feature subsets were derived using a consistency-based feature evaluation index together with the sequential forward floating selection (SFFS) technique. An average classification accuracy of 89.24% was achieved in the arousal-valence space, with valence and arousal classification accuracies of 94.86% and 94.06%, respectively.
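As a minimal sketch of the decision-level part of such a strategy (not the authors' exact method): under a naive-Bayes combination rule, per-modality classifiers each emit class posteriors, and the fused posterior is proportional to their product divided by the class prior raised to the number of modalities minus one. The function and example values below are illustrative assumptions, not data from the paper.

```python
def naive_bayes_fusion(posteriors, priors):
    """Fuse per-modality class posteriors under the naive-Bayes rule.

    posteriors: list of dicts mapping class label -> P(class | modality)
    priors: dict mapping class label -> P(class)
    Returns a dict of fused, normalized posteriors.
    """
    n = len(posteriors)
    fused = {}
    for c, prior in priors.items():
        prod = 1.0
        for p in posteriors:
            prod *= p[c]
        # Divide out the prior raised to (n - 1) so it is counted only once
        # in the product of posteriors (assumes conditional independence).
        fused[c] = prod / (prior ** (n - 1)) if prior > 0 else 0.0
    total = sum(fused.values())
    return {c: v / total for c, v in fused.items()}

# Hypothetical example: a forehead-biosignal classifier and an ECG classifier
# each score a two-class arousal decision; fusion sharpens the shared vote.
forehead = {"high": 0.7, "low": 0.3}
ecg = {"high": 0.6, "low": 0.4}
priors = {"high": 0.5, "low": 0.5}
fused = naive_bayes_fusion([forehead, ecg], priors)
```

Because both classifiers lean toward "high", the fused posterior for "high" exceeds either individual posterior, which is the usual benefit of combining weakly correlated modalities.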
