Abstract

People’s judgments of music emotion are highly subjective, so quantifying musical emotion features is the key to solving the music emotion recognition problem. This paper uses the Fourier transform to preprocess the input music signal: a digital filter performs the pre-emphasis operation, and the signal is split into frames and windowed via a convolution operation. Emotional features of the music are then extracted using Mel-frequency cepstral coefficients and cochlear frequency features. We improve a multimodal model based on the RCNN algorithm, propose the TWC music emotion framework, and construct a music emotion recognition model that incorporates the improved multimodal RCNN. The proposed model’s impact on music emotion appreciation is evaluated through music emotion recognition experiments and an analysis of college music teaching practices that emphasize emotion appreciation. The results show that 1376 songs labeled “relaxation” are assigned to the “healing” category, only 4 songs short of the target; although the song labels are not homogeneous, the model’s emotion recognition is consistent with human cognition. The mean empathy score of college students in music emotion appreciation is 69.13, an upper-middle level, indicating that the proposed model is effective in cultivating students’ music emotion appreciation.
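The preprocessing steps named in the abstract (pre-emphasis with a digital filter, followed by framing and windowing) can be sketched as below. This is a minimal illustration, not the paper’s implementation: the pre-emphasis coefficient 0.97, the frame length of 400 samples, and the hop size of 160 samples are assumed values commonly used in speech/music processing, not figures taken from the paper.

```python
import numpy as np

def preprocess(signal, frame_len=400, hop=160, alpha=0.97):
    """Pre-emphasize a 1-D signal, then split it into overlapping
    Hamming-windowed frames. Parameter values are illustrative
    assumptions, not the paper's settings."""
    # Pre-emphasis: first-order high-pass digital filter
    # y[n] = x[n] - alpha * x[n-1]
    emphasized = np.append(signal[0], signal[1:] - alpha * signal[:-1])
    # Number of full frames that fit in the signal
    n_frames = 1 + max(0, (len(emphasized) - frame_len) // hop)
    window = np.hamming(frame_len)
    frames = np.stack([
        emphasized[i * hop : i * hop + frame_len] * window
        for i in range(n_frames)
    ])
    return frames  # shape: (n_frames, frame_len)
```

The windowed frames produced here would then feed the short-time Fourier transform and MFCC extraction stages the abstract describes.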
