Abstract

Music is an art form that expresses thoughts and emotions and reflects real life through organized sound; every piece of music conveys emotion through its lyrics and melody. Human emotions are rich and varied, and so is music, so it is unreasonable to assign a song only one emotional label. Indeed, the results of existing algorithms show that some music receives clearly wrong category labels. To address these problems, we establish a two-layer, shallow-to-deep feature extraction model based on the spectrogram (from primary feature extraction to deep feature extraction), which extracts not only the most basic musical features but also digs out deeper emotional features. We then classify these features with an improved CRNN and obtain the final music emotion category. Extensive comparative experiments show that our model is well suited to the music emotion classification task.
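The abstract does not give implementation details, but a minimal sketch of the described pipeline (spectrogram in, shallow-to-deep convolutional feature extraction, a recurrent layer, emotion logits out) might look as follows in PyTorch. All layer sizes, the class name, and the number of emotion classes are illustrative assumptions, not the authors' published architecture.

```python
# Hedged sketch of a spectrogram-based CRNN emotion classifier.
# Every hyperparameter below (channel counts, n_mels, n_classes=4,
# GRU width) is a placeholder assumption for illustration only.
import torch
import torch.nn as nn

class CRNNEmotionClassifier(nn.Module):
    def __init__(self, n_mels: int = 128, n_classes: int = 4):
        super().__init__()
        self.cnn = nn.Sequential(
            # Shallow stage: first block extracts primary (low-level)
            # features from the input spectrogram.
            nn.Conv2d(1, 32, kernel_size=3, padding=1),
            nn.BatchNorm2d(32),
            nn.ReLU(),
            nn.MaxPool2d((2, 2)),
            # Deep stage: second block digs out higher-level
            # (emotion-related) features.
            nn.Conv2d(32, 64, kernel_size=3, padding=1),
            nn.BatchNorm2d(64),
            nn.ReLU(),
            nn.MaxPool2d((2, 2)),
        )
        # Recurrent layer models the temporal structure of the features.
        self.gru = nn.GRU(input_size=64 * (n_mels // 4),
                          hidden_size=128, batch_first=True,
                          bidirectional=True)
        self.fc = nn.Linear(2 * 128, n_classes)

    def forward(self, spec: torch.Tensor) -> torch.Tensor:
        # spec: (batch, 1, n_mels, time_frames)
        x = self.cnn(spec)                    # (batch, 64, n_mels/4, T/4)
        x = x.permute(0, 3, 1, 2).flatten(2)  # (batch, T/4, 64 * n_mels/4)
        out, _ = self.gru(x)
        return self.fc(out[:, -1])            # logits over emotion classes


# Example: classify a batch of two 128-mel spectrograms, 256 frames long.
model = CRNNEmotionClassifier()
logits = model(torch.randn(2, 1, 128, 256))
print(logits.shape)  # torch.Size([2, 4])
```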
