Abstract

The two commonly accepted models of affect in affective computing are categorical and two-dimensional. However, categorical models are limited to datasets containing only music on which human annotators fully agree, while two-dimensional models rely on descriptors to which users may not relate (e.g., Valence and Arousal). This paper explores the hypothesis that the music emotion problem is circular, and shows how circular models can be used for automatic music emotion recognition. The hypothesis is tested through experiments on the two commonly accepted models of affect, as well as on an original circular model proposed by the authors. First, an original dataset was assembled and annotated to investigate agreement among annotators. Then, polygonal approximations of circular regression are proposed as a practical method for investigating whether the circularity of the annotations can be exploited. Experiments with different polygons demonstrate consistent improvements over the categorical model on a dataset containing musical excerpts on which the human annotators did not fully agree. Finally, a multi-tagging strategy based on the circular predictions is put forward as a pragmatic method for automatically annotating music using the circular models.
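As an illustration only, the sketch below (Python, with NumPy and scikit-learn) shows one plausible reading of the polygonal approximation and the multi-tagging step described above: continuous angles on an emotion circle are quantized onto the vertices of a k-sided polygon, a classifier predicts the nearest vertex, and nearby tags are emitted within an angular tolerance. The angle annotations, feature vectors, classifier choice, and the tolerance rule in multi_tags are all assumptions made for this sketch, not the authors' actual pipeline.

# Hedged sketch of a polygonal approximation to circular regression.
# All names and parameters here are illustrative assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression

def angles_to_polygon_classes(theta, n_vertices):
    """Quantize angles in [0, 2*pi) onto the nearest of n_vertices
    equally spaced polygon vertices, turning circular regression
    into an n_vertices-way classification problem."""
    step = 2 * np.pi / n_vertices
    return np.round(theta / step).astype(int) % n_vertices

def polygon_classes_to_angles(labels, n_vertices):
    """Map predicted vertex indices back onto the emotion circle."""
    return labels * (2 * np.pi / n_vertices)

def angular_error(a, b):
    """Shortest arc distance between two angles, in radians."""
    diff = np.abs(a - b) % (2 * np.pi)
    return np.minimum(diff, 2 * np.pi - diff)

def multi_tags(theta_pred, tag_angles, tol=np.pi / 8):
    """Hypothetical multi-tagging rule: emit every tag whose anchor
    angle lies within tol radians of the predicted angle."""
    return [tag for tag, a in tag_angles.items()
            if angular_error(theta_pred, a) <= tol]

# Toy usage with random placeholders for audio features and annotations.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 16))            # placeholder feature vectors
theta = rng.uniform(0, 2 * np.pi, 200)    # placeholder circular annotations

for k in (4, 8, 12):                      # coarser to finer polygons
    y = angles_to_polygon_classes(theta, k)
    clf = LogisticRegression(max_iter=1000).fit(X, y)
    pred = polygon_classes_to_angles(clf.predict(X), k)
    print(f"{k}-gon mean angular error: {angular_error(theta, pred).mean():.2f} rad")

In this reading, increasing the vertex count trades the robustness of a coarse categorical model for finer angular resolution on the circle, which is consistent with the abstract's comparison across polygons of different sizes.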
