Abstract

Music can be regarded as an art of expressing inner feelings. However, most existing networks for music generation ignore the analysis of emotional expression. In this paper, we propose to synthesise music conditioned on a specified emotion, and we also integrate the internal structural characteristics of music into the generation process. Specifically, we embed emotion labels together with music structure features as the conditional input, and employ a GRU network to generate emotional music. In addition to the generator, we design a novel perceptually optimised emotion classification model that encourages the generated music to approach the emotional expression of real music. To validate the effectiveness of the proposed framework, both subjective and objective experiments are conducted, verifying that our method can produce emotional music correlated with the specified emotion and music structures.
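The conditioning scheme described above, feeding an emotion label alongside the per-step musical features into a recurrent generator, can be sketched as follows. This is a minimal illustrative example, not the paper's actual model: the dimensions, the one-hot emotion encoding, and the random note features are all hypothetical, and a single hand-written GRU cell in pure Python stands in for the full generator.

```python
import math
import random

random.seed(0)

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def matvec(W, v):
    # Multiply matrix W (list of rows) by vector v.
    return [sum(w * x for w, x in zip(row, v)) for row in W]

def add(a, b):
    return [x + y for x, y in zip(a, b)]

def gru_step(x, h, params):
    # One GRU step on the conditional input x (note features + emotion embedding).
    Wz, Uz, Wr, Ur, Wh, Uh = params
    z = [sigmoid(v) for v in add(matvec(Wz, x), matvec(Uz, h))]       # update gate
    r = [sigmoid(v) for v in add(matvec(Wr, x), matvec(Ur, h))]       # reset gate
    rh = [ri * hi for ri, hi in zip(r, h)]
    h_tilde = [math.tanh(v) for v in add(matvec(Wh, x), matvec(Uh, rh))]
    return [(1 - zi) * hi + zi * hti for zi, hi, hti in zip(z, h, h_tilde)]

def rand_mat(rows, cols):
    return [[random.uniform(-0.1, 0.1) for _ in range(cols)] for _ in range(rows)]

# Hypothetical sizes: 3 note features, a 4-class emotion one-hot, hidden size 5.
NOTE_DIM, EMO_DIM, HID = 3, 4, 5
IN_DIM = NOTE_DIM + EMO_DIM
params = (rand_mat(HID, IN_DIM), rand_mat(HID, HID),
          rand_mat(HID, IN_DIM), rand_mat(HID, HID),
          rand_mat(HID, IN_DIM), rand_mat(HID, HID))

emotion = [0.0, 1.0, 0.0, 0.0]  # hypothetical one-hot emotion label

h = [0.0] * HID
for step in range(8):
    note_features = [random.uniform(-1, 1) for _ in range(NOTE_DIM)]
    x = note_features + emotion  # condition every step on the emotion label
    h = gru_step(x, h, params)

print(len(h))  # hidden state now carries emotion-conditioned context
```

Concatenating the fixed emotion vector with the time-varying features at every step is one common way to keep the condition visible throughout generation; a real system would typically use a learned embedding and a framework implementation of the GRU instead.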
