Abstract

Creating aesthetically pleasing music via algorithmic composition has long been an ambitious goal of music research. Memory-based neural networks have been shown to be particularly well suited to this type of sequential learning. Music score data are commonly used to represent individual music features, such as durations and pitches, which when combined make up the entirety of a music piece. As more music features are integrated into the composition process, the space of labels required to represent possible feature combinations in a neural network grows rapidly, making the process computationally challenging. This consideration is especially important for polyphonic pieces, where additional features such as harmonies and multiple voices are present.
This research highlights the potential benefits of feature separation in music composition from music score data. Specifically, we demonstrate the effectiveness of neural networks for automated music composition when music features are learned separately: we first create a separate simple model for each desired music feature, and then combine the results from these simple models to compose new music. This contrasts with the common practice of employing a single complex model trained over multiple features simultaneously. Case study evaluations show significant time savings for the proposed approach while achieving similar music “quality” compared to the complex model.
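The label-space growth described above can be sketched numerically. A single complex model needs one output label per feature combination (a product), whereas separate per-feature models together need only as many labels as the sum of the feature vocabularies. The feature counts below are illustrative assumptions, not figures from the study:

```python
# Illustrative (assumed) vocabulary sizes for two music features.
num_pitches = 50     # assumed number of distinct pitch labels
num_durations = 12   # assumed number of distinct duration labels

# A single complex model over combined features needs one label
# per (pitch, duration) pair: the product of the vocabularies.
combined_labels = num_pitches * num_durations

# Separate simple models, one per feature, together need only
# the sum of the vocabularies.
separate_labels = num_pitches + num_durations

print(combined_labels)  # 600
print(separate_labels)  # 62
```

With a third feature (e.g. a voice or harmony label), the combined space multiplies again, while the separate-model total only adds one more vocabulary, which is the intuition behind the time savings reported in the case study.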
