Abstract

Most music conveys emotional signals, and classifying music by emotion makes it easier to organize and retrieve. Music emotion recognition aims to detect the emotion expressed by a piece of music, typically using human annotations as ground truth. Because traditional models achieve low recognition rates, this paper proposes a music emotion recognition model based on deep learning. The model performs dynamic emotion recognition of music and adopts the valence-arousal (VA) model to generate a sequence of VA values for a song. Specifically, the background of music emotion recognition and emotion classification models are first elaborated. Then, the representation of emotion is explained. Next, the overall framework for dynamic music emotion recognition is designed. Finally, a convolutional neural network is combined with a bidirectional long short-term memory network (BiLSTM) to perform dynamic VA prediction for music, and this method is compared with other related recognition methods.
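The VA model mentioned in the abstract places emotions on a two-dimensional valence-arousal plane, where the four quadrants correspond to broad emotion categories. As a minimal sketch of how a predicted VA pair could be mapped back to a categorical label (the quadrant names below are conventional illustrations, not labels taken from this paper):

```python
def va_quadrant(valence: float, arousal: float) -> str:
    """Map a (valence, arousal) pair in [-1, 1]^2 to a conventional
    quadrant label of the valence-arousal plane.

    Quadrant labels are illustrative assumptions; papers differ in
    the exact emotion words attached to each quadrant.
    """
    if valence >= 0 and arousal >= 0:
        return "happy/excited"   # Q1: positive valence, high arousal
    if valence < 0 and arousal >= 0:
        return "angry/tense"     # Q2: negative valence, high arousal
    if valence < 0:
        return "sad/depressed"   # Q3: negative valence, low arousal
    return "calm/relaxed"        # Q4: positive valence, low arousal


# A dynamic recognizer emits one VA pair per time frame; mapping each
# pair yields a per-frame emotion trajectory for the song.
trajectory = [(0.6, 0.7), (0.4, -0.3), (-0.5, -0.2)]
labels = [va_quadrant(v, a) for v, a in trajectory]
```

This kind of post-hoc mapping is one reason dynamic VA regression is attractive: a single regression model supports both continuous emotion curves and categorical labels.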
