Abstract

Music emotion information is widely used in music information retrieval, music recommendation, music therapy, and related applications. In the field of music emotion recognition (MER), researchers typically extract musical features to identify emotions, but this approach ignores listeners’ individual differences. Using machine learning methods, this study modeled the relations among audio features, individual factors, and music emotions. We used audio features and individual features as inputs to separately predict the perceived emotion and the felt emotion of music. The results show that real-time individual features (e.g., preference for the target music and mechanism indices) significantly improve model performance, whereas stable individual features (e.g., sex, music experience, and personality) have no significant effect. Individual features have a greater effect on the recognition of felt emotion than on the recognition of perceived emotion.
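The modeling setup described above can be illustrated with a minimal sketch: train one regressor on audio features alone and another on audio features augmented with individual features, then compare cross-validated performance. This is not the authors’ code; the feature names, model choice (a random forest via scikit-learn), data shapes, and synthetic data are all illustrative assumptions.

```python
# Minimal sketch (not the authors' method): comparing felt-emotion prediction
# from audio features alone vs. audio + real-time individual features.
# All feature names, shapes, and data below are hypothetical placeholders.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 400                                    # hypothetical listener-trial count
audio = rng.normal(size=(n, 20))           # e.g., tempo, mode, spectral stats
individual = rng.normal(size=(n, 3))       # e.g., preference for target music
felt_valence = rng.normal(size=n)          # hypothetical felt-emotion ratings

model = RandomForestRegressor(n_estimators=200, random_state=0)

# Baseline: audio features only.
r2_audio = cross_val_score(model, audio, felt_valence, cv=5, scoring="r2")

# Augmented: audio + individual features, mirroring the study's comparison.
X_full = np.hstack([audio, individual])
r2_full = cross_val_score(model, X_full, felt_valence, cv=5, scoring="r2")

print(f"audio only:         R^2 = {r2_audio.mean():.3f}")
print(f"audio + individual: R^2 = {r2_full.mean():.3f}")
```

With real rating data, a gain in the augmented model’s score would correspond to the improvement the abstract attributes to real-time individual features; the same comparison run on perceived-emotion ratings would be expected to show a smaller gain.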
