Abstract

Advancements in deep neural networks have made it possible to compose music that mimics composition by humans. This paper explores the capacity of deep learning architectures to learn musical style from an arbitrary musical corpus and proposes a method for generating music from the estimated distribution. Musical chords are extracted for various instruments and used to train a sequential model that generates polyphonic music for selected instruments. We demonstrate a simple method comprising sequential LSTM models to generate polyphonic music. Evaluation results show that the generated music is pleasant to hear and similar to music played by humans. This has great application in the entertainment industry, enabling music composers to generate a variety of creative music.
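
As an illustration of the kind of model the abstract describes, the sketch below shows a minimal sequential LSTM over chord indices extracted from a corpus, with new chords sampled from the learned distribution. The vocabulary size, sequence length, layer widths, and sampling helper are illustrative assumptions, not the authors' implementation.

    # Minimal sketch (not the authors' code): an LSTM that learns chord
    # sequences extracted from a musical corpus and samples new chords
    # from the estimated distribution. Sizes below are assumed values.
    import numpy as np
    from tensorflow.keras.models import Sequential
    from tensorflow.keras.layers import Embedding, LSTM, Dense

    VOCAB_SIZE = 500   # number of distinct chords in the corpus (assumed)
    SEQ_LEN = 32       # length of chord history fed to the model (assumed)

    model = Sequential([
        Embedding(VOCAB_SIZE, 96, input_length=SEQ_LEN),  # chord index -> vector
        LSTM(256, return_sequences=True),                 # first recurrent layer
        LSTM(256),                                        # second recurrent layer
        Dense(VOCAB_SIZE, activation="softmax"),          # distribution over next chord
    ])
    model.compile(loss="sparse_categorical_crossentropy", optimizer="adam")

    def sample_next(history, temperature=1.0):
        """Sample the next chord index from the model's predicted distribution."""
        probs = model.predict(np.array([history]), verbose=0)[0]
        probs = np.log(probs + 1e-9) / temperature
        probs = np.exp(probs) / np.sum(np.exp(probs))
        return int(np.random.choice(len(probs), p=probs))

Generation would proceed autoregressively: feed the last SEQ_LEN chords into sample_next, append the result, and repeat until the desired piece length is reached.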
