Abstract

When detecting emotions in music, many features are extracted from the original audio data. However, some of these features are redundant or irrelevant, which degrades the performance of classification models. To address this, we propose an embedded feature selection method, called Multi-label Embedded Feature Selection (MEFS), which improves classification performance by selecting features. MEFS embeds the classifier into the selection process and takes label correlation into account. Three other representative multi-label feature selection methods, known as LP-Chi, max, and avg, together with four multi-label classification algorithms, are included for performance comparison. Experimental results show that MEFS outperforms those filter methods on the music emotion dataset.
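The LP-Chi baseline named above combines the label-powerset transform with a chi-square feature score. The paper does not give its implementation here; the following is a minimal pure-Python sketch of that general idea on a toy, hypothetical dataset (function names are ours, and real audio features would first need to be discretized):

```python
from collections import Counter

def label_powerset(Y):
    """Collapse each song's label set into a single class (label-powerset transform)."""
    return [frozenset(labels) for labels in Y]

def chi2_score(feature, classes):
    """Chi-square statistic between one discrete feature column and the classes."""
    n = len(classes)
    fx, fc = Counter(feature), Counter(classes)
    obs = Counter(zip(feature, classes))
    score = 0.0
    for v in fx:
        for c in fc:
            expected = fx[v] * fc[c] / n          # expected count under independence
            score += (obs.get((v, c), 0) - expected) ** 2 / expected
    return score

def lp_chi_rank(X, Y):
    """Rank feature indices by chi-square against the powerset classes (best first)."""
    classes = label_powerset(Y)
    scores = [chi2_score([row[j] for row in X], classes) for j in range(len(X[0]))]
    return sorted(range(len(scores)), key=lambda j: -scores[j])

# Toy data: 4 songs, 2 discretized features; feature 0 tracks the emotion,
# feature 1 is constant and therefore uninformative.
X = [[1, 0], [1, 0], [0, 0], [0, 0]]
Y = [{"happy"}, {"happy"}, {"sad"}, {"sad"}]
print(lp_chi_rank(X, Y))  # feature 0 ranks first
```

A filter method like this scores features independently of the downstream classifier, which is exactly the limitation that motivates the embedded approach proposed in the abstract.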

Highlights

  • Music plays an important role in daily life

  • As we experience in our daily life, more than one emotion may be evoked by music simultaneously

  • To select classifier-specific features without the high time cost of wrapper methods, we propose a tradeoff method that introduces embedded feature selection into multi-label classification
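The highlights describe the general embedded idea: derive feature importance from a trained classifier itself rather than from a classifier-agnostic statistic. MEFS itself is not specified in this excerpt, so as a purely illustrative sketch (not the authors' algorithm), here is a hypothetical embedded selector that trains one perceptron per label, via binary relevance, and keeps the features with the largest average absolute weight:

```python
def perceptron_weights(X, y, epochs=20, lr=0.1):
    """Train a simple perceptron; return its learned feature weights."""
    w, b = [0.0] * len(X[0]), 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            pred = 1 if sum(wj * xj for wj, xj in zip(w, xi)) + b > 0 else 0
            err = yi - pred
            if err:
                w = [wj + lr * err * xj for wj, xj in zip(w, xi)]
                b += lr * err
    return w

def embedded_select(X, Y_labels, k):
    """Keep the k features with the largest mean |weight| across per-label models."""
    labels = sorted({l for ys in Y_labels for l in ys})
    importance = [0.0] * len(X[0])
    for label in labels:
        y = [1 if label in ys else 0 for ys in Y_labels]
        for j, wj in enumerate(perceptron_weights(X, y)):
            importance[j] += abs(wj)
    return sorted(range(len(importance)), key=lambda j: -importance[j])[:k]

# Toy data (hypothetical): feature 0 separates the emotions, feature 1 is constant.
X = [[1, 0], [1, 0], [0, 0], [0, 0]]
Y = [{"happy"}, {"happy"}, {"sad"}, {"sad"}]
print(embedded_select(X, Y, 1))  # → [0]
```

Because the importance scores come from the fitted model's weights, the selection is tied to the classifier without retraining it on every candidate subset, which is the cost advantage over wrapper methods that the highlight points to.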


Introduction

Music influences people's emotions by nature, making them feel happy or sad, angry or relaxed. The problem of automatically categorizing music into emotions has been modeled as single-label classification [1,2] or regression [3]. As we experience in daily life, however, more than one emotion may be evoked by music simultaneously. In such cases, single-label classification and regression can hardly model this multiplicity, and multi-label approaches are more appropriate for modeling music emotions [4,5].

