Abstract

Automatic music generation has become an increasingly exciting research topic. However, existing music generation methods tend to rely on music data alone and rarely consider generation from the perspective of music composition theory. How to use music theory to guide automatic music generation is therefore drawing increasing attention in this area. In this work, we aim to extract musical rules from given corpora and then apply them to generate new music in a similar style. We divide the melody into different scales (music segments with different numbers of notes, not sets of notes ordered by pitch) based on the music structure, and then employ a Latent Dirichlet Allocation (LDA) topic model to learn the structural constraints of the given musical form. Multi-scale fusion of musical features through reinforcement learning (RL) enables the model to consider music generation from a global scope. Experimental results show that our model outperforms the baseline according to both objective and subjective ratings. The music generated by our model has better consistency of musical style, which indicates that the extracted structural features and multi-scale modeling are promising for generating music in a particular style or on a particular topic.
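The abstract's use of an LDA topic model over melody segments can be illustrated with a minimal sketch. This is an assumption-laden toy example, not the paper's implementation: the note tokens, segment data, and topic count are all hypothetical, and scikit-learn's `LatentDirichletAllocation` stands in for whatever LDA implementation the authors used. Each melody segment is treated as a "document" whose "words" are note tokens, and LDA infers latent topics that can serve as structural style features.

```python
# Hedged sketch: melody segments as LDA "documents" of note tokens.
# Segments, tokens, and n_components are illustrative assumptions.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

# Hypothetical melody segments (the paper's "scales": music segments
# with different numbers of notes), written as pitch-token strings.
segments = [
    "C4 E4 G4 C5 G4 E4",
    "C4 E4 G4 E4 C4 C4",
    "A3 C4 E4 A4 E4 C4",
    "A3 C4 E4 C4 A3 A3",
]

# Bag-of-notes counts; note order is discarded, as is standard for LDA.
vectorizer = CountVectorizer(token_pattern=r"\S+")
counts = vectorizer.fit_transform(segments)

# Fit a small LDA model; n_components is the number of latent style topics.
lda = LatentDirichletAllocation(n_components=2, random_state=0)
topic_dist = lda.fit_transform(counts)  # shape: (n_segments, n_topics)

# Each row is a per-segment topic distribution, usable as a structural feature.
for seg, dist in zip(segments, topic_dist):
    print(seg, "->", [round(p, 2) for p in dist])
```

In a full pipeline along the lines the abstract describes, such per-segment topic distributions could act as structural constraints or reward signals for a downstream generation model; that wiring is not shown here.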
