Abstract

Music, as an integral component of culture, holds a prominent position and is widely accessible. There has been growing interest in studying the sentiment represented by music and its emotional effects on audiences; however, much of the existing literature is subjective and overlooks the impact of music on the real-time expression of emotion. In this article, we develop two labeled datasets, one for music sentiment classification and one for multimodal sentiment classification. Deep learning is used to classify music sentiment, while decision-level fusion is used to classify the multimodal sentiment of real-time listeners. We combine sentiment analysis with a conventional online music playback system and propose an innovative human-music emotional interaction system based on multimodal sentiment analysis and deep learning. Individual observation and questionnaire studies demonstrate that the interaction between human and musical sentiments has a positive impact on listeners' negative emotions.
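The abstract does not specify the fusion rule used for the listener-side classifier. As a minimal sketch only, decision-level (late) fusion is commonly realized as a weighted average of each modality's class probabilities followed by an argmax; the modality names, label set, weights, and probability values below are all illustrative assumptions, not the paper's reported method.

```python
import numpy as np

# Illustrative label set (assumed; the paper's actual classes are not given here).
LABELS = ["negative", "neutral", "positive"]

def decision_level_fusion(modal_probs, weights=None):
    """Fuse per-modality class-probability vectors by weighted averaging.

    Late fusion combines each modality's *decision* (its probability
    distribution over classes) rather than its raw features.
    """
    probs = np.asarray(modal_probs, dtype=float)        # shape: (n_modalities, n_classes)
    if weights is None:
        weights = np.full(len(probs), 1.0 / len(probs)) # equal weighting by default
    fused = np.average(probs, axis=0, weights=weights)  # weighted average of decisions
    return LABELS[int(np.argmax(fused))], fused

# Hypothetical outputs from two per-modality classifiers, e.g. one for facial
# expression and one for speech (values are made up for the example).
face_probs = [0.6, 0.3, 0.1]
speech_probs = [0.2, 0.5, 0.3]

label, fused = decision_level_fusion([face_probs, speech_probs], weights=[0.6, 0.4])
print(label, fused)  # -> negative [0.44 0.38 0.18]
```

A practical advantage of fusing at the decision level, rather than concatenating features, is that each modality's classifier can be trained and tuned independently and modalities can drop out at inference time without retraining.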
