Abstract

Affective computing has long been hampered by the difficulty of precise annotation, because emotions are highly subjective and vague. Music video emotion is especially complex owing to the diverse textual, acoustic, and visual information it carries, in the form of lyrics, the singer's voice, sounds from different instruments, and visual representations. This may be one reason why studies in this domain have been limited and no standard dataset had been produced until now. In this study, we propose an unsupervised method for music video emotion analysis using music video content from the Internet. We also produce a labelled dataset and compare supervised and unsupervised methods for emotion classification. The music and video information are processed through a multimodal architecture with audio–video information exchange and a boosting method. General 2D and 3D convolutional networks are compared with a slow–fast network using filter- and channel-separable convolutions within the multimodal architecture. Several supervised and unsupervised networks were trained end to end, and the results were evaluated using various evaluation metrics. The proposed method uses a large dataset for unsupervised emotion classification and interprets the results quantitatively and qualitatively for music videos, which had not been done before. The results show a large increase in classification score when unsupervised features and information-sharing techniques are applied to the audio and video networks. Our best classifier attained 77% accuracy, an F1-score of 0.77, and an area-under-the-curve score of 0.94 with minimal computational cost.
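The abstract mentions filter- and channel-separable convolutions in the video branch of the slow–fast network. The following is a minimal PyTorch sketch of such a separable 3D convolution block, written as our own illustration rather than the authors' code; the layer sizes, kernel shapes, and clip dimensions are assumptions.

```python
# Minimal sketch of a filter- and channel-separable 3D convolution block:
# a depthwise (per-channel) spatio-temporal filter followed by a 1x1x1
# pointwise convolution that mixes information across channels.
import torch
import torch.nn as nn

class SeparableConv3d(nn.Module):
    def __init__(self, in_ch, out_ch, kernel=(3, 3, 3)):
        super().__init__()
        pad = tuple(k // 2 for k in kernel)
        # filter-separable part: each input channel gets its own 3D filter
        self.depthwise = nn.Conv3d(in_ch, in_ch, kernel, padding=pad,
                                   groups=in_ch, bias=False)
        # channel-separable part: 1x1x1 convolution mixes channels
        self.pointwise = nn.Conv3d(in_ch, out_ch, kernel_size=1, bias=False)
        self.bn = nn.BatchNorm3d(out_ch)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):  # x: (batch, channels, time, height, width)
        return self.act(self.bn(self.pointwise(self.depthwise(x))))

# Example: a batch of two clips, 16 RGB frames at 112x112 resolution (assumed sizes)
clip = torch.randn(2, 3, 16, 112, 112)
block = SeparableConv3d(3, 64)
print(block(clip).shape)  # torch.Size([2, 64, 16, 112, 112])
```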

Highlights

  • Affective computing has long been hampered by the difficulty of precise annotation because emotions are highly subjective and vague

  • The unsupervised features of music video emotion can be useful in the initial phase of training, but human performance data is required for truly reliable evaluation

  • The evaluation metrics used in this experiment are accuracy, F1-score, and the area under the receiver operating characteristic curve (ROC-AUC); a computation sketch follows this list

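As referenced in the last highlight, metrics of this kind can be computed with scikit-learn as sketched below; the labels, scores, and number of emotion classes are synthetic placeholders, not the study's data.

```python
# Illustrative only: accuracy, F1, and one-vs-rest ROC-AUC on synthetic labels.
import numpy as np
from sklearn.metrics import accuracy_score, f1_score, roc_auc_score

rng = np.random.default_rng(0)
n_classes = 6                                   # assumed number of emotion classes
y_true = rng.integers(0, n_classes, size=200)   # ground-truth emotion labels
scores = rng.random((200, n_classes))           # classifier output scores
scores /= scores.sum(axis=1, keepdims=True)     # normalize to class probabilities
y_pred = scores.argmax(axis=1)

print("accuracy:", accuracy_score(y_true, y_pred))
print("weighted F1:", f1_score(y_true, y_pred, average="weighted"))
# one-vs-rest ROC-AUC averaged over classes
print("ROC-AUC:", roc_auc_score(y_true, scores, multi_class="ovr"))
```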

Summary

Introduction

Affective computing has long been hampered by the difficulty of precise annotation because emotions are highly subjective and vague. A possible approach to addressing this complexity when predicting emotion in a music video is to analyze the audio and visual information separately and then integrate the results. We propose an unsupervised multimodal method for music video emotion classification.
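A minimal sketch of the separate-then-integrate idea, assuming a 2D CNN over audio spectrograms, a 3D CNN over frame clips, and simple late fusion; the authors' multimodal network additionally exchanges information between the audio and video streams, which is not shown here, and all layer sizes are assumptions.

```python
# Late-fusion sketch: encode audio and video separately, then classify jointly.
import torch
import torch.nn as nn

class AudioVideoFusion(nn.Module):
    def __init__(self, n_classes=6):
        super().__init__()
        self.audio = nn.Sequential(                 # spectrogram branch
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten())
        self.video = nn.Sequential(                 # frame-clip branch
            nn.Conv3d(3, 16, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1), nn.Flatten())
        self.head = nn.Linear(16 + 16, n_classes)   # fuse and classify

    def forward(self, spectrogram, clip):
        fused = torch.cat([self.audio(spectrogram), self.video(clip)], dim=1)
        return self.head(fused)

model = AudioVideoFusion()
logits = model(torch.randn(2, 1, 128, 300),       # (batch, 1, mel bins, frames)
               torch.randn(2, 3, 16, 112, 112))   # (batch, 3, time, H, W)
print(logits.shape)                                # torch.Size([2, 6])
```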

