Abstract

A large number of music platforms have appeared on the Internet in recent years, yet existing deep learning frameworks for music recommendation remain limited in their ability to accurately identify the emotional type of a piece of music and recommend it to users. Music is commonly classified by language, musical style, thematic scene, and era, but these categories alone are far from sufficient and make music classification and identification difficult. This paper therefore combines multi-feature extraction of music emotion, the design of a BiGRU model, and the design of a music theme scene classification model to improve the accuracy of music emotion recognition. The proposed BiGRU emotion recognition model is developed and compared against other models. BiGRU correctly identifies happy and sad music with accuracies of up to 79 percent and 81.01 percent, respectively, substantially outperforming Rnet-LSTM. The greater the difference between emotion categories, the easier it is to analyze the feature sequences carrying emotional information, and the higher the recognition accuracy; this is especially evident in the recognition of happiness and sadness. The model can meet users' needs for music recognition in a variety of settings.
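For illustration, the sketch below shows how a BiGRU emotion classifier of the general kind described in the abstract might be structured in PyTorch. The abstract does not publish the paper's exact architecture, so the feature dimension, hidden size, pooling choice, and emotion class set here are assumptions, not the authors' configuration.

```python
# Minimal illustrative sketch of a BiGRU music-emotion classifier.
# Layer sizes, the 40-dim per-frame feature vector, and the four
# emotion classes are assumptions made for this example only.

import torch
import torch.nn as nn


class BiGRUEmotionClassifier(nn.Module):
    """Bidirectional GRU over a sequence of per-frame audio features
    (e.g. MFCC or chroma vectors), followed by a linear classifier."""

    def __init__(self, feature_dim=40, hidden_dim=128, num_classes=4):
        super().__init__()
        self.bigru = nn.GRU(
            input_size=feature_dim,
            hidden_size=hidden_dim,
            num_layers=2,
            batch_first=True,
            bidirectional=True,  # read the feature sequence in both directions
        )
        # Concatenated forward/backward hidden states -> emotion logits
        self.classifier = nn.Linear(2 * hidden_dim, num_classes)

    def forward(self, x):
        # x: (batch, time_steps, feature_dim)
        outputs, _ = self.bigru(x)
        # Mean-pool the per-frame BiGRU outputs over time before classifying
        pooled = outputs.mean(dim=1)
        return self.classifier(pooled)


if __name__ == "__main__":
    # Dummy batch: 8 clips, 300 feature frames each, 40-dim features (assumed)
    model = BiGRUEmotionClassifier()
    dummy = torch.randn(8, 300, 40)
    logits = model(dummy)  # (8, 4) scores, e.g. happy/sad/calm/angry (assumed labels)
    print(logits.shape)
```

In such a setup, training would typically minimize cross-entropy between the logits and labeled emotion categories; the bidirectional pass lets each frame's representation draw on both earlier and later parts of the clip, which is the usual motivation for choosing a BiGRU over a unidirectional recurrent model.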
