Abstract

The music performance system identifies the emotional elements of music to control lighting changes; when a recognition error occurs, a good stage effect cannot be created. Therefore, this paper proposes an intelligent music emotion recognition and classification algorithm for the music performance system. The first part of the algorithm analyzes the emotional features of music, including acoustic features, melody features, and audio features, and combines the three kinds of features into a feature vector set. The second part of the algorithm divides the feature vector set into training samples and test samples. The training samples are used to train a recognition and classification model based on a neural network, and the test samples are then input into the trained model to realize intelligent recognition and classification of music emotion. The results show that the kappa coefficient values calculated by the proposed algorithm are all greater than 0.75, which indicates that the recognition and classification results are consistent with the actual results and that the accuracy of recognition and classification is high. Thus, the research purpose is achieved.

Highlights

  • Watching entertainment programs has become one of the main leisure activities in our daily lives

  • After training the model based on the BP neural network, music emotion classification can be realized by inputting test music samples

  • In the formula, k1, k2, k3, k4, and k5 represent the kappa coefficients of happiness, sadness, tenderness, anger, and fear, respectively. The kappa coefficient values calculated by this method are all greater than 0.75, which indicates that the recognition and classification results agree with the actual results; that is, the application results of the algorithm are close to the actual results, and the recognition and classification accuracy is high. Thus, the research purpose is achieved (a sketch of this kappa check follows this list)
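
The highlights report one kappa coefficient per emotional style (k1 through k5). Below is a minimal sketch of such a per-class check, assuming scikit-learn is available; the one-vs-rest binarization and all variable names are illustrative assumptions, while the five labels and the 0.75 threshold come from the text.

```python
# Per-class kappa check: a minimal sketch, assuming scikit-learn.
from sklearn.metrics import cohen_kappa_score

EMOTIONS = ["happiness", "sadness", "tenderness", "anger", "fear"]

def per_class_kappa(y_true, y_pred):
    """Return k1..k5: a one-vs-rest kappa coefficient for each emotion."""
    kappas = {}
    for label in EMOTIONS:
        # Binarize: 1 where the sample is this emotion, 0 otherwise.
        true_bin = [1 if y == label else 0 for y in y_true]
        pred_bin = [1 if y == label else 0 for y in y_pred]
        kappas[label] = cohen_kappa_score(true_bin, pred_bin)
    return kappas

# Illustrative usage: values above 0.75 are read as agreement with the
# actual labels, matching the acceptance criterion stated above.
# kappas = per_class_kappa(actual_labels, predicted_labels)
# assert all(k > 0.75 for k in kappas.values())
```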

Summary

Introduction

Watching entertainment programs has become one of the main leisure activities in our daily lives. Drawing on the experience of previous research, and in order to improve the accuracy of recognition and classification, this study extracts the emotional features contained in music from multiple aspects and constructs a multifeature space vector. The classification of music emotion is achieved by using the constructed recognition and classification model, which supports the control of lighting in the music performance system.

The frequency-domain characteristics of audio include two parts: the spectrum centroid Rt and the spectrum flux Ft. By their standard definitions, Rt = Σn n·Mt(n) / Σn Mt(n), where Mt(n) is the magnitude spectrum of frame t, and Ft = Σn (Nt(n) − Nt−1(n))², where Nt(n) is the normalized magnitude spectrum. In the corresponding formulas, U1 represents acoustic features; U2 represents melody features; U3 represents audio features; S1 represents the speed of speech; S2 represents the pitch; S3 represents strength; S4 represents sound quality; and S5 stands for pronunciation. After training the model based on the BP neural network, music emotion classification can be realized by inputting test music samples.
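
As an illustration of the two frequency-domain features named above, here is a minimal numpy sketch that frames a signal and computes the spectrum centroid Rt and spectrum flux Ft per frame. It follows the standard definitions; the frame length, hop size, bin-index weighting (rather than Hz), and normalization are assumptions, since the paper's exact formulas are not reproduced on this page.

```python
# Frequency-domain feature sketch: spectrum centroid (R_t) and spectrum
# flux (F_t) per frame, under assumed framing parameters.
import numpy as np

def spectral_features(signal, frame_len=1024, hop=512):
    centroids, fluxes = [], []
    prev = None
    for start in range(0, len(signal) - frame_len + 1, hop):
        frame = signal[start:start + frame_len]
        mag = np.abs(np.fft.rfft(frame))        # magnitude spectrum M_t(n)
        bins = np.arange(len(mag))
        # R_t: magnitude-weighted mean bin index (centroid in bin units).
        centroids.append((bins * mag).sum() / (mag.sum() + 1e-12))
        # F_t: squared change of the normalized spectrum between frames.
        norm = mag / (np.linalg.norm(mag) + 1e-12)
        if prev is not None:
            fluxes.append(((norm - prev) ** 2).sum())
        prev = norm
    return np.array(centroids), np.array(fluxes)
```

In the described pipeline, these audio features (U3) would be concatenated with the acoustic features (U1) and the melody features (U2) to form the multifeature space vector for each music sample.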

Example Analysis
The example analysis covers five emotional styles: joy, grief, gentleness, indignation, and fear.
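
To make the recognition step concrete, the sketch below trains a multilayer perceptron (trained by backpropagation, standing in for the paper's BP neural network) to classify feature vectors into the five emotional styles listed above. It assumes scikit-learn; the feature dimensionality, layer size, and the randomly generated stand-in data are illustrative, not the authors' configuration.

```python
# BP-style classification sketch: an MLP trained with backpropagation,
# assuming scikit-learn; data and dimensions are illustrative.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

STYLES = ["joy", "grief", "gentleness", "indignation", "fear"]

# Hypothetical feature vector set: each row stands for one music sample's
# concatenated acoustic (U1), melody (U2), and audio (U3) features.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 24))
y = rng.choice(STYLES, size=500)

# Divide the feature vector set into training and test samples,
# mirroring the split described in the abstract.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0)

model = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=0)
model.fit(X_train, y_train)        # train on the training samples
print(model.predict(X_test[:5]))   # classify unseen test samples
```

The kappa check sketched earlier would then be applied to the model's predictions on the test samples.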

Conclusion

The kappa coefficients calculated for all five emotional styles exceed 0.75, so the recognition and classification results are consistent with the actual results and the proposed algorithm achieves the research purpose.