Abstract

Composing music is a challenge that tests the creative capacity of the composer, whether human or machine. Although the matter has long been debated, nearly all music is in some way a regurgitation or alteration of sonic ideas created before it. With enough data and the right algorithm, deep learning should therefore be able to produce music that sounds human. This report outlines several approaches to music composition with neural network models, showing that musical ideas can be gleaned from these algorithms and used to create new pieces of music. Applying deep learning to problems in the creative arts is a recent trend that has attracted a great deal of attention, and automated music generation has been an active area within it. This project deals with the generation of music from a symbolic music notation using several LSTM (Long Short-Term Memory) architectures. Fully connected and convolutional layers are used alongside LSTMs to capture rich features in the frequency domain and improve the quality of the generated music. The work focuses on unconstrained music generation and uses no information about musical structure, such as notes or chords, to aid learning.
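
The abstract does not give implementation details, so the following is only a minimal sketch, assuming a PyTorch model in which a convolutional layer extracts frequency-domain features from each frame, an LSTM models temporal structure, and a fully connected head predicts the next frame; the class name, layer sizes, and input shape are illustrative assumptions, not the authors' architecture.

```python
# Sketch (assumed, not the paper's exact model): Conv1d over frequency bins,
# LSTM over time, fully connected output for next-frame prediction.
import torch
import torch.nn as nn

class ConvLSTMComposer(nn.Module):
    def __init__(self, n_bins=128, conv_channels=32, lstm_hidden=256):
        super().__init__()
        # 1-D convolution over the frequency axis of each time frame
        self.conv = nn.Sequential(
            nn.Conv1d(1, conv_channels, kernel_size=5, padding=2),
            nn.ReLU(),
        )
        # LSTM captures temporal dependencies across frames
        self.lstm = nn.LSTM(conv_channels * n_bins, lstm_hidden, batch_first=True)
        # Fully connected head predicts the next frame's frequency bins
        self.fc = nn.Linear(lstm_hidden, n_bins)

    def forward(self, x):
        # x: (batch, time, n_bins)
        b, t, f = x.shape
        frames = x.reshape(b * t, 1, f)            # convolve each frame separately
        feats = self.conv(frames).reshape(b, t, -1)
        out, _ = self.lstm(feats)
        return self.fc(out)                        # (batch, time, n_bins)

if __name__ == "__main__":
    model = ConvLSTMComposer()
    dummy = torch.randn(2, 64, 128)                # 2 clips, 64 frames, 128 bins
    print(model(dummy).shape)                      # torch.Size([2, 64, 128])
```

In such a setup, generation would proceed autoregressively: the model is fed a seed sequence and its predicted frame is appended to the input to produce the next one.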
