Abstract

Creating symbolic melodies with machine learning is challenging because it requires an understanding of musical structure, including both inter-dependencies between concurrent musical aspects and long-term dependencies across time. Learning the relationships between events that occur far apart in time poses a considerable challenge for machine learning models. Music is also notable in that each note must account for several inter-dependencies, spanning melodic, harmonic, and rhythmic aspects. Baseline methods such as RNNs, LSTMs, and GRUs often struggle to capture these dependencies, producing musically incoherent or repetitive melodies. This study therefore proposes a hierarchical multi-head attention LSTM model for generating polyphonic symbolic melodies. The hierarchy allows the model to learn long-term dependencies at different levels of abstraction while retaining the ability to capture inter-dependencies, enabling it to generate melodies that are more complex and expressive than those of previous methods while remaining musically coherent. The study is conducted on two major symbolic music datasets, MAESTRO and Classical-Music MIDI, both of which encode musical content in MIDI. Because the artistic nature of music makes generated content difficult to evaluate, quantitative analysis alone is often insufficient; human listening tests are therefore conducted to strengthen the evaluation. Quantitative analysis of the generated melodies shows significantly lower MSE loss than baseline methods, and the model generates melodies that are both musically coherent and expressive. Listening tests conducted on a Likert scale support the quantitative results, with the proposed model receiving better ratings than the baseline methods.
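To make the architecture described above concrete, the sketch below shows one plausible hierarchical multi-head attention LSTM in PyTorch. This is a minimal sketch under stated assumptions: the two-level note/bar hierarchy, the fixed bar length, the layer sizes, and all names (HierAttnLSTM, bar_len, and so on) are illustrative, not taken from the paper.

```python
# A minimal sketch of a hierarchical multi-head-attention LSTM for
# symbolic melody generation. The two-level (note-level / bar-level)
# hierarchy, layer sizes, and names are illustrative assumptions.
import torch
import torch.nn as nn

class HierAttnLSTM(nn.Module):
    def __init__(self, vocab_size=128, embed_dim=128, hidden_dim=256,
                 num_heads=4, bar_len=16):
        super().__init__()
        self.bar_len = bar_len  # notes per bar (assumed fixed)
        self.embed = nn.Embedding(vocab_size, embed_dim)
        # Lower level: dependencies between notes within and across bars.
        self.note_lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.note_attn = nn.MultiheadAttention(hidden_dim, num_heads,
                                               batch_first=True)
        # Upper level: long-term dependencies between bar summaries.
        self.bar_lstm = nn.LSTM(hidden_dim, hidden_dim, batch_first=True)
        self.bar_attn = nn.MultiheadAttention(hidden_dim, num_heads,
                                              batch_first=True)
        self.out = nn.Linear(2 * hidden_dim, vocab_size)

    def forward(self, tokens):
        # tokens: (batch, num_bars * bar_len) MIDI pitch indices
        b, t = tokens.shape
        x = self.embed(tokens)
        note_h, _ = self.note_lstm(x)                    # (b, t, h)
        note_ctx, _ = self.note_attn(note_h, note_h, note_h)
        # Summarize each bar by the last note state in that bar.
        bars = note_h.reshape(b, t // self.bar_len,
                              self.bar_len, -1)[:, :, -1]
        bar_h, _ = self.bar_lstm(bars)                   # (b, bars, h)
        bar_ctx, _ = self.bar_attn(bar_h, bar_h, bar_h)
        # Broadcast bar-level context back to every note position.
        bar_ctx = bar_ctx.repeat_interleave(self.bar_len, dim=1)
        return self.out(torch.cat([note_ctx, bar_ctx], dim=-1))

model = HierAttnLSTM()
logits = model(torch.randint(0, 128, (2, 64)))  # 2 sequences, 4 bars each
print(logits.shape)  # torch.Size([2, 64, 128])
```

A practical generator would additionally apply causal attention masks and train with teacher forcing on next-note prediction; those details are omitted here for brevity.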
