Abstract

In recent years, the explosive growth of online music resources has made music information difficult to retrieve and manage, and efficient retrieval and classification of music information has become an active research topic. Thayer’s two-dimensional emotion plane is selected as the basis for building the music emotion database. Music is divided into five categories; the concept of continuous emotion perception is introduced, and the emotion of a piece is treated as a point on the two-dimensional plane, whose location is determined by two affective variables (valence and arousal, the VA values). Manual annotation is used to determine the regions that the five emotion categories occupy on the plane, and regression is used to map music features to VA values, so that the music emotion classification problem is transformed into a regression problem. A regression-based music emotion classification system is designed and implemented, comprising a training part and a testing part. In the training part, three algorithms, namely polynomial regression, support vector regression, and k-plane piecewise regression, are used to fit the regression models. In the testing part, the VA values of input music are predicted by regression and the music is then classified; system performance is measured by classification accuracy. Results show that combining support vector regression with k-plane piecewise regression improves accuracy by 3 to 4 percentage points over either algorithm alone, and by 6 percentage points over the traditional classification method based on a support vector machine. Music emotion is also classified with support vector machine, k-nearest-neighbor, fuzzy neural network, fuzzy k-nearest-neighbor, Bayesian, and Fisher linear discriminant classifiers; among these, the support vector machine, fuzzy k-nearest-neighbor, and Fisher linear discriminant classifiers achieve classification accuracies above 80%. Finally, a new “mixed classifier” algorithm is proposed, and the music emotion recognition rate based on this algorithm reaches 84.9%.
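To make the regression-then-classify pipeline concrete, the following minimal sketch implements one plausible reading of it with scikit-learn: two support vector regressors predict a clip's valence and arousal from acoustic features, and the resulting VA point is assigned to the nearest of five emotion regions. The synthetic feature data, the 12-dimensional feature vector, the region names, and the region centroids are all illustrative assumptions; the paper's actual features, annotations, and region boundaries are not specified in the abstract.

```python
# Sketch of regression-based music emotion classification (assumptions noted below).
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

# Hypothetical training data: one row of acoustic features per clip, with
# human-annotated valence/arousal (VA) targets in [-1, 1]. Real data would
# come from the manually labeled music emotion database described above.
rng = np.random.default_rng(0)
X_train = rng.normal(size=(200, 12))        # 12 acoustic features (assumed)
y_valence = rng.uniform(-1, 1, size=200)
y_arousal = rng.uniform(-1, 1, size=200)

# One SVR model per emotion dimension, mirroring the paper's SVR variant.
valence_model = make_pipeline(StandardScaler(), SVR(kernel="rbf"))
arousal_model = make_pipeline(StandardScaler(), SVR(kernel="rbf"))
valence_model.fit(X_train, y_valence)
arousal_model.fit(X_train, y_arousal)

# Hypothetical centroids of the five emotion regions on the VA plane; the
# paper derives the actual regions from manual annotation.
CENTROIDS = {
    "exuberant": ( 0.7,  0.7),
    "anxious":   (-0.7,  0.7),
    "depressed": (-0.7, -0.7),
    "content":   ( 0.7, -0.7),
    "calm":      ( 0.0,  0.0),
}

def classify(features: np.ndarray) -> str:
    """Predict a clip's VA point, then assign the nearest emotion region."""
    x = features.reshape(1, -1)
    v = valence_model.predict(x)[0]
    a = arousal_model.predict(x)[0]
    return min(CENTROIDS,
               key=lambda c: (v - CENTROIDS[c][0]) ** 2 + (a - CENTROIDS[c][1]) ** 2)

print(classify(rng.normal(size=12)))
```

Under this reading, the classification step reduces to geometry on the VA plane, which is why improving the regression fit (e.g., combining SVR with k-plane piecewise regression) directly improves classification accuracy.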
