Abstract

Emotion is one of the most crucial attributes of music. However, due to the scarcity of emotion-labeled music datasets, emotion-conditioned symbolic music generation with deep learning has not been investigated in depth. In particular, no prior study explores conditional music generation under the guidance of emotion, and few studies adopt time-varying emotional conditions. To address these issues, we first endow three public lead sheet datasets with fine-grained emotion annotations by automatically computing valence labels from the chord progressions. Second, we propose a novel and effective encoder-decoder architecture, EmoMusicTV, to explore the impact of emotional conditions on multiple music generation tasks and to capture the rich variability of musical sequences. EmoMusicTV is a transformer-based variational autoencoder (VAE) with a hierarchical latent variable structure that models both the holistic properties of music segments and short-term variations within bars. Piece-level and bar-level emotion labels are embedded in their corresponding latent spaces to guide generation. Third, we pretrain EmoMusicTV on a lead sheet continuation task to further improve its performance on conditional melody or harmony generation. Experimental results demonstrate that EmoMusicTV outperforms previous methods on three tasks, i.e., melody harmonization, melody generation given harmony, and lead sheet generation. Ablation studies verify the significant roles of the emotional conditions and the hierarchical latent variable structure in conditional music generation. A human listening test shows that the lead sheets generated by EmoMusicTV are closer to the ground truth (GT) than those of previous methods and perform only slightly worse than the GT in conveying emotional polarity.
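The abstract states that valence labels are derived automatically from chord progressions. As a rough illustration only, the sketch below maps chord qualities to heuristic valence scores and aggregates them into bar-level and piece-level labels; the QUALITY_VALENCE table, the "root:quality" chord-symbol format, and the thresholds are assumptions made for demonstration, not the paper's actual labeling procedure.

```python
# Illustrative sketch (not the paper's exact method): derive bar-level and
# piece-level valence labels from a chord progression by mapping chord
# qualities to heuristic valence scores and averaging them per bar.

from statistics import mean

# Assumed mapping: major-type chords lean positive, minor/diminished lean negative.
QUALITY_VALENCE = {
    "maj": 1.0, "maj7": 0.8, "7": 0.4,
    "min": -0.8, "min7": -0.6, "dim": -1.0,
}

def chord_valence(chord: str) -> float:
    """Heuristic valence score for a chord symbol such as 'C:maj' or 'A:min7'."""
    _, _, quality = chord.partition(":")
    return QUALITY_VALENCE.get(quality, 0.0)

def bar_valence_labels(bars: list[list[str]]) -> list[int]:
    """Map each bar (a list of chord symbols) to a discrete valence label.

    Labels: 1 = positive, 0 = neutral, -1 = negative (illustrative thresholds).
    """
    labels = []
    for chords in bars:
        v = mean(chord_valence(c) for c in chords) if chords else 0.0
        labels.append(1 if v > 0.2 else -1 if v < -0.2 else 0)
    return labels

def piece_valence_label(bars: list[list[str]]) -> int:
    """Aggregate bar-level labels into one piece-level label by majority sign."""
    total = sum(bar_valence_labels(bars))
    return 1 if total > 0 else -1 if total < 0 else 0

if __name__ == "__main__":
    progression = [["C:maj", "G:maj"], ["A:min", "F:maj"], ["D:min7", "G:7"], ["C:maj"]]
    print(bar_valence_labels(progression))   # -> [1, 0, 0, 1]
    print(piece_valence_label(progression))  # -> 1
```

Such bar-level labels would provide the time-varying (bar-level) conditions mentioned in the abstract, while the piece-level label supplies the holistic emotional condition.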
